[04-MAY-18] The Animal Cage Camera (ACC) provides a camera for monitoring small animals in their cages. The A3034A is a rectangular circuit board that spans the width of a mouse cage, supported by four legs on its corners. The A3034A camera sits at the center of the board, looking down into the cage, and is equipped with a fish-eye lens. Around the edge of the circuit board are twelve white LEDs and twelve infra-red LEDs that provide variable illumination. An embedded computer on the circuit board provides continuous upload of compressed video over TCP/IP, as well as illumination control over TCP/IP. The entire circuit runs on a single 24-V power input, and communicates through one RJ-45 socket over a local area network.
The A3034 is designed to operate in a Faraday enclosure. Here is the A3034X installed over an Animal Location Tracker (A3032) in an FE2F Faraday enclosure, with a plastic tub representing a mouse cage.
In the photograph above, the A3034X receives power through a power feedthrough at the back of the enclosure, by way of the black power cable. Its Ethernet connection enters through an RJ-45 feedthrough, by way of the blue cable. The red cable is the LWDAQ root cable of the ALT (Animal Location Tracker) platform. This cable passes through an RJ-45 feedthrough in the back of the Faraday enclosure and then to a LWDAQ Driver (A2071E). The gray cable is the logic cable that connects the ALT to an Octal Data Receiver (A3027). The ALT gets its power from its LWDAQ cable. On either side of the platform are two Loop Antennas (A3015C). Their coaxial cables leave through BNC feedthroughs at the back of the enclosure and connect to the A3027 antenna inputs.
The A3034X camera uses a 1640 × 1232-pixel CMOS image sensor manufactured by Sony Semiconductor and a high-resolution wide-angle lens. The photograph above shows the DSL215, which provides a 120° field of view. We also equip the A3034 with the DSL219 lens, which provides almost as wide a field of view, but contains its own distortion-canceling optics. Both lenses can view an entire mouse cage from just above the rim of the cage base.
The Videoarchiver Tool in our LWDAQ_VBT software provides live display or time-synchronized recording with delayed display. When recording, we read MJPEG videos out of the camera and write them to disk. The MJPEG files allow us to synchronize the names of the video files with the system clock to within one frame period, which is 50 ms. We then compress the video and add it to a larger compressed video we are accumulating in a user-defined recording directory.
[13-APR-18] The A3034 requires its own 24-V power supply. We use the same power adaptor we provide with the LWDAQ Driver (A2071E) and Command Transmitter (A3029C). The power socket is marked +24V on the printed circuit board. The power plug is 5.5 mm outer diameter and 2.1 mm inner diameter, with the inner contact positive.
We bring this 24-V power into a faraday enclosure with a power jack bulkhead connector. The power socket of the connector faces outward and receives the plug of the power adaptor. Within the enclosure, the cable attached to the bulkhead connector plugs into the A3034.
The A3034 will operate at full performance for input voltages of 18-24 V. With a 24-V input, when streaming live, compressed video under ambient lighting, the A3034 consumes 100 mA. When we turn on the visible LEDs to full power, the current consumption increases by 30 mA. When we turn on the infra-red LEDs, the current consumption increases by 40 mA.
[23-APR-18] The diagram below shows how one Animal Cage Camera (A3034X) and one Animal Location Tracker (ALT, A3032C) are powered and controlled for animal tracking. The animal cage goes on the ALT platform, with the camera on top. You will need an Ethernet Hub with at least three sockets so that your computer can communicate with the camera and a LWDAQ Driver at the same time.
The power for the camera enters the Faraday Enclosure via its own feedthrough. The feedthrough has a two-wire power cable soldered permanently to its inner contacts. We plug the far end of this cable into the camera.
The A3034X has a fixed IP address: 10.0.0.234. Our LWDAQ Drivers have a default IP address 10.0.0.37. Set up your computer to use its wired Ethernet connection to communicate with the 10.0.0 subnet. Consult the Configurator Manual for instructions on setting up communication with a solitary LWDAQ Driver. Once you have communication with the LWDAQ Driver, you can unplug the driver and plug in the camera in its place.
When you connect power to the camera, it boots up. The visible and infra-red LEDs turn on, although you can see only the visible LEDs with your eye. Wait for one minute until the LEDs turn off. Now attempt to communicate with the camera. Open the Videoarchiver Tool in the LWDAQ Program. Before you use the Videoarchiver for the first time, you must download and install the Videoarchiver Libraries. Try one of the On and Off buttons for the visible LEDs. If the camera responds, your connection is working.
[11-MAY-18] The A303401A circuit board requires that we displace L1 and add wire links to accommodate an inverted footprint. The result is our A3034X with half and full-power visible illumination and full-power infrared illumination.
For the A303401B we propose to add a bridge rectifier to protect the circuit from reversal of its 24-V power supply. We will populate the four-bit DACs to give full control of the brightness of both LED arrays. We must add a couple of 0-V pads for scope probes, and test points for all signals. We will move vias at least 25 mils from pads.
[25-MAR-18] We assemble the first prototype A3034A with circuit board A303401A. At power-up, the white LEDs shine brightly for less than a second, then go out. Two resistors overheat and burn. We find that U1, the current mirror, is suffering from thermal run-away. Our original circuit runs 2 mA through U1-6 at maximum brightness and expects 2 mA to flow through U1-3 as well. The two transistors are in the same SOT-323 package, and our assumption was that they would remain at the same temperature. If U1-3 heats up, we expect U1-6 to heat up too, dropping its base-emitter voltage, and so controlling the current through U1-6. The base-emitter voltage drop for a given collector current decreases with temperature by roughly 2.4 mV/°C, as we show for diodes here. We find that for currents larger than 500 μA into U1-6, the current through U1-3 increases during the first few seconds. The LEDs turn off because the voltage at U1-3 drops below the minimum 18 V required to provide current to the LEDs through Q2 and Q3. Instead, current flows through the base junctions of Q2 and Q3 into U1-3. When we have 20 mA flowing through U1-3 with 10 V across it, the power dissipation in U1-3 is 200 mW, which exceeds the maximum for the UMX1 dual transistor.
[26-MAR-18] We change R1-R4 to 100 kΩ, 50 kΩ, 27 kΩ, and 14 kΩ respectively. At full brightness we have 400 μA flowing through U1-6. We remove Q3, so we are powering only D1-D6. We have R6 = 270 Ω and R5 = 2.2 kΩ. We observe 1.7 V across R5, which implies 770 μA through U1-3. We do not have thermal run-away, but the U1-3 current is twice that of U1-6, which implies that the junction of U1-3 is around 7°C hotter than that of U1-6 (VT ln(770/400) ÷ 2.4 mV/°C ≈ 7 °C). The voltage across R6, we assume, is around 1.7 V also, so we have 6.3 mA flowing through the white LEDs. If Q2, a ZXTP2025F, has the typical current gain of 380, we expect the base current into Q2-3 to be 20 μA ≪ 770 μA. We could decrease R6 to 100 Ω and so increase the LED current to 10 mA. The power dissipation in R6 will then be 10 mW, which is fine.
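As a check on the 7°C estimate, here is a minimal numerical sketch, assuming a room-temperature thermal voltage VT ≈ 25.7 mV together with the 2.4 mV/°C coefficient quoted above.

```python
import math

VT = 0.0257        # thermal voltage kT/q at 25 degC, assumed
TC = 0.0024        # base-emitter drop coefficient, 2.4 mV/degC, from above
I_out = 770e-6     # measured current through U1-3
I_ref = 400e-6     # measured current through U1-6

# A mirror ratio above unity implies the output transistor runs hotter,
# its base-emitter drop reduced by dVbe at the same collector current.
dVbe = VT * math.log(I_out / I_ref)
dT = dVbe / TC
print(round(dT, 1))  # about 7 degC
```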
[27-MAR-18] We have R1-R4 all driven by the same 3.3 V and their resistance in parallel is 7.2 kΩ. The voltage across them all is 2.7 V for a current of 370 μA. Base-emitter voltage drop is 0.61 V. We have R6 = 100 Ω. Voltage across R5 = 2.2 kΩ is 1.5 V for 680 μA. Voltage across R6 is 1.4 V for 14 mA. We remove R6 and still see 1.5 V across R5, suggesting the base current drawn by Q2 is negligible.
We replace R1-R4 with a single 18 kΩ and see 2.8 V across it for 150 μA. We have R5 = 5 kΩ and 0.8 V across it, so 160 μA. The voltage across R6 = 100 Ω is also 0.8 V for 8 mA into the LEDs. We load Q3, D7-D12, and R7 = 100 Ω and see 8 mA flowing into the new diodes. The voltage drop across both chains of LEDs is 16 V for average forward drop of 2.7 V per diode. We load 5 kΩ for R15 and 18 kΩ for R10, to which we connect 3.3 V. We load D15-D26. We get 8 mA through the twelve infra-red diodes. Pin Q5-1 is at 15 V, making the average forward drop of the diodes 1.25 V.
We test the visible and infra-red illumination for image-taking in a cage. The visible illumination is bright. Our visible LED is the white L130-2780 of the Luxeon 3014 series: a 2700-K warm-white emitter in a P1206 package. The infra-red illumination is too dim for us to obtain a blob image of a toy mouse. The infra-red LED is the XZTHI54W: an 880-nm emitter in a P0805 package. According to its data sheet, this LED should emit a minimum of 2π sr × 0.8 mW/sr = 5 mW of infra-red light at 20 mA forward current, or 2 mW at 8 mA. We drop R14 from 100 Ω to 27 Ω and R10 from 18 kΩ to 10 kΩ. We now see 1.0 V across R14, so 37 mA flowing through the LEDs. The LED forward voltage is now 1.38 V. We put an SD445 photodiode up against one of the LEDs and get 3.4 mA ÷ 0.6 mA/mW = 5.7 mW of infra-red light for an input power of 50 mW, or 11% efficiency. We drop the current to 10 mA and see 1.9 mW, or 13%. We try an HSDL-4400 at 37 mA and get 4.1 mW. We restore the original LED. Our white LEDs at 8 mA give us photocurrent 2.8 mA. Assuming an average wavelength of 500 nm, this is 11 mW. The electrical input power is 100 mW, so the efficiency is around 11%.
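The infra-red efficiency arithmetic can be reproduced with the numbers quoted above; the 0.6 mA/mW figure is the SD445 responsivity we use at this wavelength.

```python
photocurrent_mA = 3.4    # SD445 photocurrent against the infra-red LED
responsivity = 0.6       # SD445 responsivity in mA/mW, from the text
led_current_mA = 37.0    # LED current with R14 = 27 ohms
led_forward_V = 1.38     # measured LED forward voltage

optical_mW = photocurrent_mA / responsivity      # emitted infra-red power
electrical_mW = led_current_mA * led_forward_V   # input electrical power
print(round(optical_mW, 1))                      # about 5.7 mW
print(round(100 * optical_mW / electrical_mW))   # about 11 percent
```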
[04-APR-18] We choose new DAC resistor values R4 = 40.2 kΩ up to R1 = 316 kΩ for the visible light control and R13 = 20.0 kΩ up to R10 = 160 kΩ for the infra-red light control. Assuming the U1 and U2 base-emitter drop is around 0.6 V and the logic HI is around 3.3 V, we expect the following control currents versus DAC count.
Assuming that the control currents are mirrored exactly by U1 and U3, we calculate the visible and infra-red LED current versus DAC count. We have R5 = R15 = 4.7 kΩ, R6 = R7 = 100 Ω, and R14 = 27 Ω.
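We can sketch the current calculation in code. The intermediate DAC resistor values are assumed to be binary-weighted between the endpoints quoted above, and we model the LED current as the control current scaled by the ratio of the mirror emitter resistor to the LED series resistor; both are our own assumptions for illustration, not values taken from a schematic.

```python
def control_current(count, resistors, v_logic=3.3, v_be=0.6):
    # Four-bit DAC: each set bit sources (v_logic - v_be) / R into the mirror.
    # resistors[0] is the least-significant bit (largest resistor).
    return sum((v_logic - v_be) / r
               for bit, r in enumerate(resistors) if count & (1 << bit))

visible_R = [316e3, 158e3, 79e3, 40.2e3]   # R1..R4, middle values assumed
infrared_R = [160e3, 80e3, 40e3, 20e3]     # R10..R13, middle values assumed

# Assumed mirror model: LED current = control current * R_emitter / R_series.
ir_mA = 1000 * control_current(15, infrared_R) * 4.7e3 / 27
vis_mA = 1000 * control_current(15, visible_R) * 4.7e3 / 100
print(round(ir_mA), round(vis_mA))  # about 44 and 6 mA at full count
```

The full-count results agree with the 44 mA infra-red and 6 mA visible maximum currents given in the text.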
The maximum forward current of our infra-red LED is 50 mA. We expect to be just under the maximum at 44 mA for full brightness. The maximum current through the white LED is 120 mA and our maximum current is 6 mA. We remove our photodiodes D13 and D14 and replace them with phototransistors one hundred times more sensitive to light, and set R8 = R9 = 20 kΩ.
[13-APR-18] We have two A3034X, W0381 and W0382. We are shipping W0381 to ION along with ALT V0385. The Raspberry Pi username is email@example.com and password is "osicamera".
In the figure above we see two Ethernet cables and an RJ-45 feedthrough to carry the Ethernet connection from a local area network hub to the A3034X. The power adaptor is in a white box, and its bulkhead connector is in a bag. We have standoffs to raise and lower the camera, cable ties to fasten the Ethernet cable to the circuit board, extra flex cables for the camera connection, and a wider-angle lens for use with the camera.
[24-APR-18] We have two videos of a cell phone clock, one 14-s long, the other 100-s long. We compress both with all eight ffmpeg compression speed settings, which we activate with options like "-preset veryslow". We leave the image quality at its default value, which we specify with "-crf 23".
We are surprised to see that veryfast gives the smallest file. We try a 30-s video, half of which is taken up with our hands moving and adjusting the phone under the camera. We use a script stored in Scripts.tcl.
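A comparison of this kind can be scripted as follows. This sketch only constructs the ffmpeg argument lists rather than invoking them; the input file name is hypothetical, and the preset list is a subset of the x264 presets.

```python
presets = ["ultrafast", "veryfast", "medium", "slow", "veryslow"]

commands = []
for preset in presets:
    # Re-encode the same clip once per preset, default quality -crf 23.
    commands.append(["ffmpeg", "-i", "clock.mp4", "-c:v", "libx264",
                     "-preset", preset, "-crf", "23",
                     f"clock_{preset}.mp4"])

for cmd in commands:
    print(" ".join(cmd))
```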
We make a 30-s video in which our hands are moving the phone continuously, with a diagram as a background, and repeat our measurement.
We pick "veryfast" as our preset value. It is three times faster than the default, and the files are the same size or smaller. We expect the maximum size of the compressed videos to be around 150 kBytes/s when many objects are moving quickly, and the minimum size to be 10 kBytes/s when nothing is moving.
[08-MAY-18] We consolidate all scripts into a single directory. We make all ffmpeg and mplayer calls directly from Tcl. A watchdog process, defined in Tcl, runs independently and monitors the segment directory. If the ffmpeg segmentation process is abandoned by the Videoarchiver, the watchdog will terminate the segmenter when there are more than a maximum number of files in the segment directory. We record for fifteen minutes on MacOS and obtain fifteen 20-fps H264 video files, each exactly one minute long, each beginning with our cell phone clock at 01 seconds. We do the same thing on Windows, but the files vary in length from 55 s to 65 s. In one example, ffprobe tells us that the video length is 64.1 s, there are 1282 frames, and the frame rate is 20 fps. We combine nine such videos together to form one of 542.85 s (9:03) duration and 10857 frames at 20 fps. The time on our phone clock is 8:43:12 at the start and 8:52:14 at the end.
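The durations above follow directly from the frame counts; a quick check using only the numbers quoted in the text:

```python
fps = 20
print(1282 / fps)    # 64.1 s for the 1282-frame Windows segment
print(10857 / fps)   # 542.85 s, about 9 min 3 s, for the combined video
```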
With nothing moving in the field of view, our compressed 1-s video segments are 56 kBytes long, with our set-up diagram as a background. With the phone clock in view, they are 62 kBytes long. With our hands spinning the phone the files are 250 kBytes.
To help with off-line development of the Videoarchiver Tool, we implement a virtual video feed in the Videoarchiver that we can activate with virtual_video. The feed reads a video file in the Virtual directory once and streams it to a local TCP port. We use the ffmpeg -re option to request that the input file be streamed at 20 fps, but it appears that this frame rate is not enforced. The files we record with the virtual feed are marked as having twenty frames per second, but they are stretched out in time. A one-minute video at 20 fps loses the first three seconds and lasts for 64 s.
[08-MAY-18] We compress a five-second movie of five white rats moving around in a cage. With the veryfast algorithm, the file is 1.3 MBytes. With veryslow it is 1.2 MBytes. We can crop a video stream with ffmpeg, and extract sections of a video as well. The following command extracts the interval from time 00:00 to 01:17 and preserves only the rectangle with top-left corner x=0, y=100, width 720 and height 900 (pixels).
ffmpeg -i V1.mp4 -ss 00:00:00 -t 00:01:17 -filter:v "crop=720:900:0:100" V1_cut.mp4
In mplayer we can jump to a particular time in a video with:
mplayer -ss 01:30 Blob_Track.mp4
These features will permit us to navigate through video files to particular locations to match EEG recordings. We can operate mplayer in slave mode to have it play video files that do not exist at the time we open the player window.
mplayer -slave -quiet -idle -input nodefault-bindings -noconfig all
loadfile V1525694641.mp4
pausing seek 30 2
We start mplayer in slave mode and tell it to idle when it's done playing a video. We also override all video screen key bindings so the user cannot quit, pause, or otherwise divert the playback with the mouse and keyboard in the video window. We deliver commands via stdin in this example (the keyboard). We load a video file, then seek absolute time 30 s and pause.
[10-MAY-18] We have nine recorded files, each nominally 600 s long. Their names all end with 488. According to ffprobe, the frame rate is 20 fps. Eight have either 12004 or 12005 frames and one has 11981 frames, for a total of 108016, an average of 12001.8 frames in each 600-s video.
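The average frame count follows from the totals; the nominal count for a 600-s video at 20 fps is 12000, so the recordings run slightly over nominal.

```python
total_frames = 108016   # sum over the nine videos, from ffprobe
videos = 9
nominal = 600 * 20      # 12000 frames expected per 600-s video

average = total_frames / videos
excess_pct = 100 * (average / nominal - 1)
print(round(average, 1))     # 12001.8 frames per video
print(round(excess_pct, 3))  # about 0.015 percent over nominal
```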
[07-JUN-18] At ION/UCL we record clear and synchronous video from an A3034X and A3032C of one mouse in a cage.