
2D-3D transformation


---

Hello everyone,

I use measurement images in my project, but I was not able to use the full height and width of the images due to hardware constraints. After importing the images back into the system, when I try to orient the other image, it gives deviations of about 0.2-0.3 pixel. I tried to adjust the focal length, but I could not reduce the deviations below 0.1 pixel. The crop rate is different in width and height. Is there a way to adjust the focal length or some other parameter to orient the image more precisely?

Moreover, I can access the GOM transformation matrix through scripting, but since I change the width and height of the image, is there a way to relate 2D coordinates to 3D coordinates through scripting? In other words, how can I access the transformation matrix after orienting the other image with the new height and width values?

Thanks,

Canset

 


Hello Canset,

Manually oriented ("other") images can be used for image mapping and mesh colorization, but they are excluded from metrology functionality that requires high precision and accuracy.

For TRITOP, multiple images from a calibrated high-end, fixed-focus camera show multiple coded reference points and are bundle adjusted, which provides data precise enough for distortion correction.

This is not the case when importing images from an arbitrary device and manually clicking 5 pairs of image and object points as reference within each image independently. The distortion of these images, which can be significant depending on the camera used, cannot be corrected with such sparse and imprecise data. In this case, the focal length (equivalent to 35-mm format, i.e. not the actual focal length of the lens) defines the field of view for the complete measurement series; it should be taken from the camera specification and will not be valid for cropped images. The transformation matrix calculated by the manual orientation only contains the external orientation of the camera, with 6 degrees of freedom.
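For illustration, such a 6-DOF external orientation (three rotation angles, three translations) can be assembled into a 4x4 matrix like this; this is a minimal sketch in plain Python, and the function names are illustrative, not part of the GOM scripting API:

```python
import math

def rotation_matrix(rx, ry, rz):
    """Build a 3x3 rotation matrix from Euler angles in radians (Rz * Ry * Rx)."""
    cx, sx = math.cos(rx), math.sin(rx)
    cy, sy = math.cos(ry), math.sin(ry)
    cz, sz = math.cos(rz), math.sin(rz)
    Rx = [[1, 0, 0], [0, cx, -sx], [0, sx, cx]]
    Ry = [[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]]
    Rz = [[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]]

    def matmul(A, B):
        return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
                for i in range(3)]

    return matmul(Rz, matmul(Ry, Rx))

def pose_4x4(angles, translation):
    """Assemble the 4x4 external orientation: 6 DOF = 3 angles + 3 translations."""
    R = rotation_matrix(*angles)
    t = translation
    return [R[0] + [t[0]],
            R[1] + [t[1]],
            R[2] + [t[2]],
            [0.0, 0.0, 0.0, 1.0]]
```

Note that this matrix contains no intrinsic information at all (focal length, principal point, distortion), which is why it alone cannot correct a distorted image.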

What kind of original measurement images do you have? Do they contain coded reference points, and what hardware constraints are preventing their use? Maybe we can solve the problem by reducing the resolution with a consistent downscaling factor instead of manual cropping. In general, it would be helpful if you explained your overall use case.

Best regards,
Stefanos


Hello Stefanos,

Basically, I export ATOS measurement images, which have height and width of 3008x4112 pixels, and crop them to 800x800 to be able to feed them into a deep learning framework. After the model performs some visualization on the images, I import them back as "other" images and orient them.

If I don't change the focal length and leave it at the 35 mm-format value, the deviations are much bigger than with the focal length set to 100 or 150. I want the user to be able to see the marked regions on the 3D color mesh more precisely, which is why I am asking whether there are other adjustments I could make to decrease the deviations. Cropping is required for my model due to limited GPU capability.
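As a side note on why adjusting the focal length cannot fully fix this: cropping 3008x4112 down to 800x800 removes a different fraction of the image along each axis, so no single 35mm-equivalent value can be correct for both axes at once. A quick sketch (the starting value of 35 is a placeholder, not your camera's actual equivalent focal length):

```python
# Sketch: why one 35mm-equivalent focal length cannot fit a non-uniform crop.
FULL_W, FULL_H = 4112, 3008   # original ATOS measurement image (pixels)
CROP_W, CROP_H = 800, 800     # crop fed to the deep learning model

f_eq = 35.0  # hypothetical 35mm-equivalent focal length of the full image

# Cropping narrows the field of view; the equivalent focal length grows
# by the inverse of the retained fraction -- separately per axis.
f_eq_w = f_eq * FULL_W / CROP_W   # ~179.9
f_eq_h = f_eq * FULL_H / CROP_H   # ~131.6

print(f_eq_w, f_eq_h)  # the two axes disagree, so no single value is correct
```

This mismatch between the two axes is consistent with the behavior you describe: values like 100 or 150 happen to reduce the deviations, but no single value can remove them.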

After performing the orientation and projection, I also want to mark the area on the color mesh by creating points and boundaries with fitting elements on the 3D surface and calculate its length and width. I have the projection angles and projection center after orienting the other image. How should I use this information to relate pixel coordinates to 3D coordinates? I tried to create a camera coordinate system to simulate the orientation of the measurement images, but I was not able to relate them.
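For illustration only, here is how pixel coordinates relate to 3D points in an ideal pinhole model (no distortion). The conversion of the 35mm-equivalent focal length to pixels assumes it is referenced to the 24 mm frame height; none of these functions are GOM API calls, and the actual convention used by the software may differ:

```python
import math

def pixel_to_ray(u, v, width, height, f_eq_35mm):
    """Back-project a pixel into a unit viewing ray in camera coordinates.

    Ideal pinhole model, no distortion. The 35mm-equivalent focal length
    is converted to pixels via the 24 mm frame height (assumed convention).
    """
    f_px = f_eq_35mm * height / 24.0     # focal length in pixels
    cx, cy = width / 2.0, height / 2.0   # principal point at image center
    d = (u - cx, v - cy, f_px)           # ray direction in the camera frame
    n = math.sqrt(sum(c * c for c in d))
    return tuple(c / n for c in d)

def ray_point(direction_cam, R, center, depth):
    """3D point at a given depth along the ray: center + depth * (R @ direction).

    R is the 3x3 rotation of the external orientation, center the projection
    center. The depth along the ray is unknown from a single image; it must
    come from intersecting the ray with the mesh.
    """
    d_world = [sum(R[i][j] * direction_cam[j] for j in range(3)) for i in range(3)]
    return tuple(center[i] + depth * d_world[i] for i in range(3))
```

The key point is the last comment: a single image only gives you a ray per pixel, and the 3D point is found by intersecting that ray with the colorized mesh.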

Many thanks,

Canset


Hello Canset,

As I mentioned, orienting "other" images with 5 image/object point pairs does not provide enough information to effectively undistort the image. Even if the user provides the correct "focal length (equivalent to 35mm format)" and clicks these point pairs with perfect precision, the mapping will still not be as precise as that of ATOS/TRITOP measurements, which are adjusted as a bundle with scan and/or reference point information.

If you accept that and still want to crop a part of the image and process it in another software, you should copy and paste the result back into the original image before reimporting it as an "other" image. Then use the "focal length (equivalent to 35mm format)" of the camera that took the original image - which is not its actual "focal length".
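A minimal sketch of that paste-back step, using NumPy arrays as stand-ins for the images (the crop offsets are illustrative; use the same window you originally cropped for your model):

```python
import numpy as np

# Stand-ins for the real images: original ATOS export and model output.
original = np.zeros((3008, 4112, 3), dtype=np.uint8)      # full measurement image
processed = np.full((800, 800, 3), 255, dtype=np.uint8)   # 800x800 model result

# Top-left corner of the crop window (illustrative values -- must match
# the window that was cut out before feeding the deep learning model).
x0, y0 = 1656, 1104

# Paste the processed crop back into the original image, then save and
# reimport the full-size image as an "other" image.
original[y0:y0 + 800, x0:x0 + 800] = processed
```

Because the reimported image then has the original dimensions again, the camera's 35mm-equivalent focal length remains valid.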

If the original image is an ATOS measurement, and a is the Camera angle (under Calibration Properties), then the "focal length (equivalent to 35mm format)" is 12/tan(a*π/360).
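For example, in a script (plain Python, not a GOM API call):

```python
import math

def focal_length_equiv_35mm(camera_angle_deg):
    """35mm-equivalent focal length from the ATOS camera angle in degrees,
    using the formula f = 12 / tan(a * pi / 360)."""
    return 12.0 / math.tan(camera_angle_deg * math.pi / 360.0)

print(focal_length_equiv_35mm(90.0))  # -> 12.0, since tan(45 deg) = 1
```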

Your workflow is very specific and outside the scope of (precise) metrology, thus not directly supported.
Your best bet would be to use an original ATOS/TRITOP measurement with 5+ coded reference points, replace a cropped part with your processed image and import that as an ATOS/TRITOP measurement along with the rest - not as "other" image.

Best regards,
Stefanos


Hello Stefanos,

Thanks for your guidance. I think I can try to paste the processed part into the original image. Yes, in my case the original image is an ATOS measurement image. It makes much more sense to use the original image to create the 3D surface.

I can also access the camera angle through scripting, so I will use the formula you provided.

Many thanks,

Canset

