matlab - Creating 3D model from cross-sectional images and normal vectors


I'm trying to map a series of binary images into 3D space based on each pixel's position (q, p), the image's location, and the image's normal vector. For each image (829x829) I have the x,y,z position of its center and dx,dy,dz across the transducer (the normal vector pointing out of the image, values in mm). Since I know the physical diameter of the images (30 mm), I'm able to interpolate each pixel's position relative to the known center point. An important note: I cannot measure the roll of the image (rotation about its center), so I assume each image is oriented with its height axis in the xy plane.
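(A minimal sketch of that pixel-spacing calculation, with illustrative variable names that are not from my actual code, would look something like this:)

imageSize       = 829;                         % pixels per side
imageDiameterMM = 30;                          % physical width/height of one image, in mm
mmPerPixel      = imageDiameterMM / imageSize; % pixel pitch

% In-plane offsets of every pixel from the image center, in mm
[cols, rows] = meshgrid(1:imageSize, 1:imageSize);
uOffset = (cols - (imageSize + 1)/2) * mmPerPixel;  % along the image's "width" axis
vOffset = (rows - (imageSize + 1)/2) * mmPerPixel;  % along the image's "height" axis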

My current method is to create a rotation matrix, compute the local x,y,z of each pixel of each frame, and then transform them by multiplying by the matrix like this:

for frame = 1:numberofframes
    %create unit normal vector
    vz = [dx,dy,dz] / norm([dx,dy,dz]);

    %define x unit vector from the upwards direction, y unit vector orthogonal to both
    vx = cross([0,1,0],vz);
    vy = cross(vz,vx);

    %create rotation matrix
    r = [vx(1) vy(1) vz(1); vx(2) vy(2) vz(2); vx(3) vy(3) vz(3)];

    z = zposition(frame);

    for q = 1:height
        x = xposition(frame) + (millimeters/pixel*(height/2 - q));

        for p = 1:width
            %counter n makes it easier to multiply by r and re-assign the values
            %(rather than cycling through q and p again)
            n = n + 1;

            y = yposition(frame) + (millimeters/pixel*(width/2 - p));

            %create position array that correlates a position to each pixel
            positions(n,:) = [x,y,z];
        end
    end
    transformedpositions(:,1:3) = round(positions(:,1:3)*r);
end
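(For comparison, here is a rough, vectorized sketch of the "rotate the local in-plane offsets first, then translate by the frame's center" convention. It is not my code above; uOffset, vOffset, and center are the illustrative names from the earlier sketch, and vx is explicitly normalized here because the cross product of two non-orthogonal unit vectors is not unit length.)

% Sketch only. Assumes uOffset/vOffset are the in-plane pixel offsets (mm) and
% center = [xposition(frame), yposition(frame), zposition(frame)].
vz = [dx, dy, dz] / norm([dx, dy, dz]);
vx = cross([0, 1, 0], vz);
vx = vx / norm(vx);            % not unit length unless vz is perpendicular to [0,1,0]
vy = cross(vz, vx);            % unit length, since vz and vx are orthogonal unit vectors

R = [vx(:), vy(:), vz(:)];     % columns are the image's local axes in world coordinates

localOffsets   = [uOffset(:), vOffset(:), zeros(numel(uOffset), 1)];  % N x 3, in mm
worldPositions = localOffsets * R.' + center;  % rotate first, then translate by the center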

I then create a 4D volume matrix using transformedpositions as indices and assigning the proper binary value, and create the model with patch(isosurface(1:maxvaluey, 1:maxvaluex, 1:maxvaluez, volume));
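(A minimal sketch of that volume-filling and isosurface step, assuming the transformed positions have already been shifted and scaled so that rounding gives positive integer voxel indices, and that binaryValues holds the corresponding pixel values, might look like this; the names here are illustrative:)

% Sketch only. worldPositions is N x 3 (x,y,z), binaryValues is N x 1.
voxIdx  = round(worldPositions);
volSize = max(voxIdx, [], 1);                         % [maxX maxY maxZ]
volume  = false(volSize(2), volSize(1), volSize(3));  % rows = y, columns = x

lin = sub2ind(size(volume), voxIdx(:,2), voxIdx(:,1), voxIdx(:,3));
volume(lin(binaryValues > 0)) = true;

fv = isosurface(1:volSize(1), 1:volSize(2), 1:volSize(3), double(volume), 0.5);
patch(fv, 'FaceColor', 'red', 'EdgeColor', 'none');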

My resulting models are rotated but seem flat: every image plane ends up parallel and close together, when in reality the images were taken several cm apart. I think the rotation is not orienting them differently, and the pixel interpolation is heavily dependent on the image orientation (it should be independent of it).

Questions:

  • Is the matrix sensible? Using Euler's rotation matrix (with roll = 0) gives me similar-looking models with a different matrix.
  • Is the order of operations correct here, or should I translate the pixels and image last (i.e. add the x,y,z of the center point, or add in the pixel location, after transforming)? I have tried the former and it doesn't seem to alter the result.
  • Is setting the z value the same for every pixel of an image the proper method?

Any advice is appreciated! Here's a sample image, a cross-section looking 360° around the transducer in the center: [sample image]

