Multiple cameras on the same coordinate system

Hello folks,
First-time poster here. I have a small project that I was hoping to use MATLAB for, and perhaps someone here can tell me whether the Computer Vision Toolbox is all I need, or whether I should expect to write some custom code. I have 8 video feeds of a sporting event in my backyard. The same model of camera is installed all around this small stage area in a circular fashion. I placed a rectangular checkerboard in the center (the day before), and all cameras could see it, so I have some calibration footage. I want to register all of these cameras in a global coordinate system. That is to say, I need a way to know where every camera is relative to every other camera.
So far I've played around with the 'detectCheckerboardPoints' function, and it seems to work fine. But I have no real idea where to start in terms of doing what I want. From some reading, it seems the Computer Vision Toolbox is good at calibrating a stereo pair, but that is not my task. Is the toolbox able to solve my problem? I guess I need to know the rotation and translation of every camera in a global coordinate system. Any guidance will be much appreciated.
Thanks,
-Scott

Answers (1)

Dima Lisin
Dima Lisin on 29 Oct 2015

0 votes

Yes, you can use the Computer Vision System Toolbox for this. First, calibrate each of your cameras individually using the Camera Calibrator app. Then, since all the cameras can see the same checkerboard, you can use the extrinsics function to compute their respective rotations and translations. Finally, you can use the cameraMatrix function to get the camera projection matrices relating each camera to the common world coordinate system.
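For one camera, that step might look like the sketch below. This assumes the camera has already been calibrated individually (the cameraParams object), and that 'calibImage' holds that camera's shot of the shared checkerboard; the variable names and square size are placeholders, not from the original thread.

```matlab
% Sketch, under the assumptions above. Measure squareSize on the real board.
squareSize = 50;  % mm (placeholder value)

% Detect the shared checkerboard in this camera's image
[imagePoints, boardSize] = detectCheckerboardPoints(calibImage);
worldPoints = generateCheckerboardPoints(boardSize, squareSize);

% R, t map world (checkerboard) coordinates into this camera's coordinates
[R, t] = extrinsics(imagePoints, worldPoints, cameraParams);

% 4x3 projection matrix: [X Y Z 1] * P is proportional to [x y 1]
P = cameraMatrix(cameraParams, R, t);
```

Repeating this per camera, with the same physical board, puts every camera's extrinsics in one common coordinate system.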

11 comments

Scott Binks
Scott Binks on 11 Nov 2015
Hello Dima, many thanks for your help. I actually ended up trying something similar, and now I need your help again. I tried using the Camera Calibrator app. It showed me "extrinsics," i.e., a plot with all the cameras on it. This view seems correct to me (based on how I have all the cameras set up in my backyard). However, when I exported this using "Export Camera Parameters" and looked at the translations in "cameraParams.TranslationVectors," the numbers don't make sense to me. That is to say, they don't match the view I see in the app itself. For example, I expected all the translations to be relative to the center gray box with the red coordinate-system arrows (as I saw in the view), but they seem not to match up.
As an example, please see the attached screenshot of just two cameras. Camera "1" seems to be at location ~(10,145) based on the plot, but in the output of cameraParams.TranslationVectors, I see something like (-324,92). What's going on? And more importantly, how do I fix this to get the desired result?
In short, how can I use this cameraParams output that I got from the cameraCalibrator to obtain translation/rotation values of the cameras relative to the center.
Thanks again for all your time and support!
Dima Lisin
Dima Lisin on 11 Nov 2015
Hi Scott,
The thing to keep in mind here is that the extrinsics are the transformation from the world coordinates into the camera's coordinates. So if R and t are your extrinsics, then the location of the camera in the world coordinate system is -t*R', and its orientation is R'.
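In MATLAB terms, that conversion is just two lines; the variable names here are placeholders:

```matlab
% R, t are the extrinsics returned for one camera (world -> camera transform).
orientation = R';       % camera orientation in world coordinates
location    = -t * R';  % camera center in world coordinates
```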
Scott Binks
Scott Binks on 11 Nov 2015
Edited: Scott Binks on 12 Nov 2015
Thanks for the quick response Dima. I'll try this out today and come back with a yay or nay. I think this might be exactly what I'm looking for.
As a quick follow-on, I did this back in the OpenCV days, where we had origin, up, and down vectors for each camera. I gather that -t*R' will give me the origin of a camera, but then what are the up and down vectors? Again, by analogy, I presume they are columns of R'. Would you suggest just permuting these until I get something sensible, or do you happen to know offhand what this could be?
EDIT: I'm also concerned that the output I got from the cameraCalibrator app, while having the right geometry, doesn't seem to have the right scale, if you know what I mean. For example, I know how far apart a pair of cameras is (several feet), but the cameraCalibrator reports it as being 1-2 feet. What am I doing wrong?
Thanks again.
Dima Lisin
Dima Lisin on 12 Nov 2015
Hi Scott,
I don't know what the up and down vectors mean. R' is the 3-D rotation matrix that defines the orientation of the camera's image plane in world coordinates.
I am a bit confused about how you are using the Camera Calibrator. If you have two cameras, then you can use the Stereo Camera Calibrator app to do stereo calibration, and get the rotation and translation between the cameras. If you have more than two cameras, then you should use Camera Calibrator to calibrate each camera individually, and then use the extrinsics function to compute their locations and orientations in a common coordinate system.
Scott Binks
Scott Binks on 12 Nov 2015
Edited: Scott Binks on 13 Nov 2015
Hi Dima,
I solved many of the issues I was having. Just a few follow-on thoughts. Doesn't the cameraCalibrator app already return an object that contains fields called 'RotationMatrices' and 'TranslationVectors'? Why do I need to use the extrinsics function call?
Moreover, what exactly are the rotation matrices with respect to? I understand that they are the orientations of the camera image planes, but are they facing the (0,0,0) point (the checkerboard's top-left corner), which is defined as +x,+y,-z?
Thanks so much.
Dima Lisin
Dima Lisin on 13 Nov 2015
Yes, the cameraParameters object has the rotation matrices and translation vectors. So if you calibrate all your cameras at the same time by taking pictures of the same checkerboard simultaneously, then you can use those.
Alternatively, you can calibrate each of your cameras separately, using unrelated sets of calibration images. Then you can set up the cameras, place a single checkerboard so that it is visible to all of them, and take a picture of it with each camera. In that case, you would use the extrinsics function.
The rotation matrices are with respect to the world coordinates, which are defined by the checkerboard. To be precise, the extrinsics (R and t) are a transformation from the checkerboard's coordinate system into the camera's coordinate system. See the documentation for more details.
Scott Binks
Scott Binks on 13 Nov 2015
That makes a lot of sense now, thank you. I actually used 8 images (one from each camera) that were all of the same checkerboard, but in slightly different locations on the stage (less than 1 ft apart in each image). The output seems sensible, though I can't know for sure. Do you think this is okay? If not, I'm still not sure what workflow I'm supposed to follow. I think you said to use the Camera Calibrator for one camera at a time? How does that work?
Finally, I'm a bit confused about the rotation matrix R'. Which vectors are the x, y, and z? For example, is the first column x, the second column y, and the third column z? Or is it the rows?
Thanks for bearing with all my questions, I feel I am very close, and I really appreciate your support.
Dima Lisin
Dima Lisin on 13 Nov 2015
You seem to be using the Stereo Camera Calibrator app, which calibrates stereo pairs of cameras. There is also a Camera Calibrator app, which calibrates a single camera. You can invoke it using the cameraCalibrator command.
What you should do is take about 20 images of the checkerboard with each camera and calibrate them individually. Then arrange your cameras the way you need, place a checkerboard such that it is visible to all of them, and take a picture with each camera. Now the checkerboard will define a common coordinate system for all your cameras.
The rotation matrix does not contain x, y, z vectors. It rotates a 3-D vector: rotatedVector = origVector * R, where rotatedVector and origVector are 3-D vectors of [x,y,z] coordinates. So you don't need to take the rotation matrix apart.
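That row-vector convention can be sketched as follows; 'worldVec' is a placeholder name for any 1x3 point:

```matlab
% R maps world (checkerboard) coordinates into camera coordinates.
camVec = worldVec * R;     % world -> camera

% R is orthonormal, so its inverse is its transpose:
worldVecBack = camVec * R';  % camera -> world, recovers worldVec
```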
Scott Binks
Scott Binks on 13 Nov 2015
Roger that. So the idea is that I take 20 images from each camera, pass it through cameraCalibrator individually and save out the cameraParams object for each camera. Then I set up all my cameras, and take a picture of a checkerboard visible to all cameras using each camera. Then I use the extrinsics function with this new image and the camera's corresponding cameraParams object to get the rotation and translation?
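That workflow, looped over all eight cameras, might look like the sketch below. 'params{i}' and 'sharedImage{i}' are placeholder names for the i-th camera's saved cameraParameters object and its shot of the single shared checkerboard, and the square size is an assumption.

```matlab
% Sketch, under the assumptions above.
squareSize = 50;   % mm (placeholder; use the measured value)
numCams = 8;
locations = zeros(numCams, 3);   % camera centers in board coordinates
orientations = cell(numCams, 1); % camera orientations in board coordinates

for i = 1:numCams
    [imagePoints, boardSize] = detectCheckerboardPoints(sharedImage{i});
    worldPoints = generateCheckerboardPoints(boardSize, squareSize);
    [R, t] = extrinsics(imagePoints, worldPoints, params{i});
    locations(i, :) = -t * R';
    orientations{i} = R';
end
```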
Regarding your point about the rotation matrix: I want to know the rotation of each camera relative to a common coordinate system, because then I can use the rotation matrix for another coordinate system that I define for another application. If I know how to take the rotation matrix apart, I can permute the right dimensions and/or flip their signs and get them into another common coordinate system.
Thanks!!
Scott Binks
Scott Binks on 15 Nov 2015
Okay, I tried the suggestion. I took 10-15 images with each camera (with some translations and rotations of the checkerboard) and then used the 'estimateCameraParameters' function to get the calibration parameters for each camera.
I then put the checkerboard in the middle of the stage and took a single shot from all the cameras without moving the checkerboard. I then found the points in each of the common checkerboard images and used the 'extrinsics' function with the calibrated parameters for each camera to get the t and R for each of these "common" frames.
My question is that I still don't understand what point these rotations and translations are relative to. I know that the camera's location is -t*R' in world coordinates, and its orientation is R', but what is all of this relative to? The orientation (R') of the camera has to be relative to something. How can I verify this?
Thanks!
Dima Lisin
Dima Lisin on 16 Nov 2015
It is relative to the checkerboard that you took a picture of with all your cameras. The checkerboard defines the world coordinate system. More specifically, the checkerboard defines the Z=0 plane. The X-axis goes to the right along the longer side of the checkerboard, the Y-axis goes down along the shorter side of the board, and the Z-axis points into the board. -t*R' is the camera's location in these world coordinates, and R' is the camera's orientation in those coordinates.
To verify this, you can plot checkerboard points that you get from the generateCheckerboardPoints function, and then plot the cameras using the plotCamera function.
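A minimal verification sketch for one camera, assuming R and t came from extrinsics and that boardSize/squareSize match the physical board (placeholder names):

```matlab
% Plot the board points on the Z=0 plane and the camera pose in the
% common (checkerboard) coordinate system.
worldPoints = generateCheckerboardPoints(boardSize, squareSize);
figure; hold on;
plot3(worldPoints(:,1), worldPoints(:,2), zeros(size(worldPoints,1),1), 'k.');
plotCamera('Location', -t*R', 'Orientation', R', 'Size', 20);
axis equal; grid on;
```

Repeating the plotCamera call for each camera should reproduce the circular arrangement around the stage if the extrinsics are right.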
By the way, it would have been easier to use the cameraCalibrator app than the estimateCameraParameters function for calibration.



Asked on 27 Oct 2015
Last commented on 16 Nov 2015
