December 19, 2012
February 22, 2012
1. Get X Virtual FrameBuffer
sudo apt-get install xvfb
2. Launch ImageJ (“cd” to the ij.jar directory):
Xvfb :15 &
DISPLAY=:15 java -Xmx12288m -jar ij.jar -run "TestIJ Plugin"
- "TestIJ Plugin" is the name of the compiled plugin as it appears in the ImageJ menu; there is no need to specify a subfolder.
- :15 is just an example display number; any unused display works.
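The steps above can be combined into a single launch script. A minimal sketch, assuming ij.jar and the compiled "TestIJ Plugin" sit in the current directory; the display number, screen geometry and heap size are example values:

```shell
#!/bin/sh
# Headless ImageJ launcher (sketch): start a virtual framebuffer,
# run the plugin, then clean up the Xvfb process.
DISP=:15                                  # example display number
Xvfb "$DISP" -screen 0 1280x1024x24 &     # virtual framebuffer, no real display needed
XVFB_PID=$!
trap 'kill "$XVFB_PID" 2>/dev/null' EXIT  # stop Xvfb when the script exits
DISPLAY="$DISP" java -Xmx12288m -jar ij.jar -run "TestIJ Plugin"
```

The trap ensures the Xvfb process does not linger if the Java step fails or is interrupted.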
October 21, 2011
It is available for downloading here.
October 12, 2011
We have finally received the parts for the Elphel-Eyesis-4π camera and started assembling them, hoping that everything will fit together as we planned. And for the most part it does, which seems a bit like magic to us: you design the camera on the computer in a 3D CAD program, make a long list of the parts it will consist of, and a couple of months later it all turns into a physical object, not just a virtual 3D design.
Of course some of the parts will need minor modifications – some due to mistakes we made, and some due to manufacturing problems. But none of them were significant enough to prevent us from assembling the first 3 prototypes, which will be 100% operational spherical panorama cameras. Elphel-Eyesis-4π is the second-generation panoramic imaging system by Elphel Inc. It is able to capture high-resolution images over a full 360 degrees and create 4π-steradian spherical panoramas at a high frame rate. The actual recording device consists of a weatherproof camera head that contains the image sensor front ends and lenses, distributed spherically to cover the entire field of view. The rest of the electronic components, as well as the SSDs for data storage, are contained inside the camera pole.
Elphel-Eyesis-4π covers a full sphere. 24 sensors (8 in a horizontal array, 8 pointing at +30 to +90 degrees towards the zenith, and 8 pointing at -30 to -90 degrees towards the nadir) ensure a uniformly high resolution over the entire covered area. A new Inertial Measurement Unit (IMU) mounted at the top of the camera pole provides high-resolution 3D position and orientation of the camera.
September 16, 2011
We are proud to add a new product and camera KIT to the Elphel portfolio. See the pricelist.
The NC353L-369-IMU/GPS is a new camera configuration with an Inertial Measurement Unit (IMU) and an optional GPS receiver. In addition to storing the geographical coordinates with each captured image in a video stream, this makes it possible to also save the 3D orientation (yaw, pitch and roll) and 3D acceleration (a six-degrees-of-freedom inertial sensor) of the camera at the moment each image is captured, at very high precision (2400 samples/second). A detailed description can be found in the previous post.
June 15, 2011
This April we attached an Eyesis camera to a backpack and took it to southern Utah. Unfortunately I had not finished the IMU hardware by then, so we could rely only on GPS for tagging the imagery. GPS alone can work when the camera is on a car, but with a camera moving at pedestrian speed (images were taken 1-1.5 meters apart) it was insufficient even in open areas with a clear view of the sky. Additionally, the camera orientation changed much more than when it is attached to a car, which (normally) moves in the direction the wheels can roll. Before moving forward with the IMU data processing we decided to try to orient and place some of the imagery manually – by looking through the panoramas and adjusting the camera heading/tilt/roll to make them look level and oriented so that the next camera location matches the map. Just tweaking the KML file with a text editor seemed impractical for adjusting hundreds of panoramic images, so we decided to add more functionality to our simple WebGL panorama viewer, making it suitable both for walking through the panorama sets and for editing the orientations and locations of the camera spots.
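For what it's worth, even the impractical text-editor approach can be semi-automated. A hedged sketch, assuming the KML keeps one <heading>value</heading> element per line (real KML exports would call for a proper XML tool rather than awk); the file names, the demo input and the offset are made up:

```shell
#!/bin/sh
# Hedged sketch: add a fixed offset to every <heading> value in a KML file.
# Assumes one <heading>...</heading> element per line, which is NOT
# guaranteed for real KML exports.
OFFSET=15
# hypothetical, heavily simplified demo input
printf '<Placemark><heading>350</heading></Placemark>\n' > panoramas.kml
awk -v off="$OFFSET" '
  match($0, /<heading>[-0-9.]+<\/heading>/) {
    h   = substr($0, RSTART + 9, RLENGTH - 19)      # numeric value only
    new = (h + off) % 360                           # wrap around a full turn
    sub(/<heading>[-0-9.]+<\/heading>/, "<heading>" new "</heading>")
  }
  { print }
' panoramas.kml > panoramas_adjusted.kml
```

With the demo input, the heading of 350 plus an offset of 15 wraps to 5 in the output file. For hundreds of panoramas this still leaves the hard part – choosing the right per-image corrections – which is exactly what the interactive viewer is for.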
March 30, 2011
Update: Google has updated their Salt Lake City imagery in Street View, so the comparison is better now.
Works in Firefox 4.0 and Chrome. Does NOT work in Firefox 3.6.X.
The above sample shows how custom panoramas (in our case: a panorama we shot with our Elphel Eyesis) can be integrated in a custom StreetView.
The Elphel Eyesis imagery is in the top-left window. The top-right window shows the current (lower-resolution) official Google Street View of Salt Lake City, supposedly made by the 2nd or 3rd generation of GSV cameras.
The OpenStreetMap window is used for switching between the available panorama points, while the Google Map with its pegman fills the otherwise unused space and generously shows the view direction.
March 11, 2011
Both the Google Maps API and the OpenLayers API are quite simple, though it can take some time to find a perfect example. The maps have been added to the WebGL panorama view test page (read “Experimenting with WebGL panoramas”).
December 2, 2009
Rewrote some blocks of the code for the 10359's FPGA and at last found out what the problem was in the alternation mode with buffering: the delay between the frames sent to the 10353 was too small. I had used a 16-bit counter for this delay, which was not enough; I extended it to 32 bits and succeeded with a delay of about 2^20 clock cycles (2^20 × ~10 ns ≈ 10 ms – this is a bit long, but I'm probably still missing something in the frame generation).
Andreas recently suggested adding a mode where the alternating frames are combined into one, to make it easier to tell which frame belongs to which sensor. I coded the first version with simple buffering but haven't tested it much. Some notes:
- the 2 frames are combined vertically.
- the resolution of the resulting frame is set from the camera interface (camvc).
- the sensors are programmed to half the vertical size, and the camvc doesn't know about it.
I also updated the 10359 interface to switch between these modes and to change other settings.
It is worth mentioning that after the ‘automatic phase adjustment’ from the 10359's interface, the sensors end up with different color gains in their registers, so these parameters need to be reprogrammed after each phase adjustment.
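Since this comes up after every phase adjustment, the reprogramming could be scripted. A heavily hedged sketch – the camera address, the parsedit.php query format and the GAINR/GAINB parameter names are all assumptions, not confirmed against the actual 10353/10359 software; the script only prints the URLs it would request:

```shell
#!/bin/sh
# Hedged sketch: build the parameter-setting requests that would re-apply
# identical color gains on each sensor channel after a phase adjustment.
# CAMERA, the query format and the parameter names are assumptions.
CAMERA=192.168.0.9
: > gain_urls.txt                      # collect the request URLs here
for CHN in 1 2 3; do
  echo "http://$CAMERA/parsedit.php?channel=$CHN&GAINR=0x10000&GAINB=0x10000" \
    >> gain_urls.txt
done
cat gain_urls.txt                      # pipe each line to wget to actually apply
```

To actually apply the settings, each printed URL would be fetched (e.g. with wget) against the real camera, with the parameter names checked against the running firmware first.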
What we’ve got now working in the 10359 is:
1. Alternation mode with (or without) buffering in the 10359's DDR SDRAM.
2. Alternation mode with combined buffered frames.
Still to do:
1. Make the sensors programmed identically after phase adjustment.
2. Add a stereo module.
September 29, 2009
The modes are:
- Direct alternating channels mode.
At first, I rewrote the switching logic from what I already had, and this resulted in a parsedit.php “Error 500” and the streamer stopping (when the sensors were in free-run mode; in triggered mode everything was OK), while everything was fine in the testbench. I couldn't find what was wrong for some time. Part of the logic was based on the sync signals' levels, and under certain conditions the switching didn't work; the error was corrected by using only the sync signals' edges.
Fig.1 A sample frame in the testbench is 2592×3 (to reduce verification time); “ivact” – vertical sync signal, “ihact” – horizontal sync signal, “pxdr” – pixel data. The white numbers on the ‘ivact’ line indicate the corresponding channel.
- Alternating channels – one channel direct, the other buffered.
In this mode the situation was almost the same as with the previous one and required the same changes, but the frame from the second (buffered) channel looks brighter; however, if the first channel is disabled, the buffered frame is correct:
Fig.2 “framen” – frame enable register, allows work when ‘high’.
Fig.3 Good direct frame (left) and ‘whitened’ buffered frame (right). Probably some signals' latencies are incorrect.
- Depth frame mode.
This mode is kept separate because of the problems with the other two modes; it is currently being added.
I’m also making the project less messy, optimizing register addresses and rewriting scripts to make working with the 10359 easier.