I assume that everybody will at least have heard about the Kinect from Microsoft. It is mainly a camera that sees depth and color. In addition, it has a whole image-interpretation library that is used to detect gestures and poses from people standing in front of it. However, a use that seems more directly applicable to architects and designers is 3D scanning.
While there are more accurate methods (terrestrial laser scanning captures millions of points fast and accurately; photogrammetry uses plain images but requires more extensive 3D reconstruction algorithms), the Kinect is very attractive: it is cheap (around $100-150 standalone) and works on "any" computer, partly thanks to the Open Source drivers that have become available and, more recently, to the Microsoft SDK, although the latter only works on Windows.
That said, the majority of architects and designers are not really into coding. Luckily, there are some free and cheap ready-made solutions.
ReconstructMe

For Windows, you can try ReconstructMe, an all-in-one program that has a non-commercial version for, well, non-commercial work (e.g. students).

Skanect

As an alternative, there is also Skanect, which is available for Windows and OSX. It is not Open Source, but uses several Open Source libraries and is free to use (for now?).

I connected the Kinect to a USB port, ran the software and everything worked out-of-the-box (at least on OSX).

The interface is basic: you see the depth image, the RGB camera image and a point-cloud 3D view. You set the main accuracy resolution (e.g. 10 mm) and press start. The Kinect starts capturing points (with added color) and, as long as you move the Kinect around slowly enough to keep sufficient overlap, the measured points are added to the whole scene.

When you are ready, you can stop and export the result as a PLY file. You can open this e.g. with the Open Source MeshLab or Blender applications, to further edit and export it. The process is fairly simple for the user, but beware that you get huge files: the scan of the example I show here is about 420 MB. You also have to take the limited depth range into account, so you might have to wander around and carry the laptop with you to get close enough.

When you have some reference measurements, you can then use this scan as a quite detailed and accurate reference model, although translating it into optimized polygonal meshes or even BIM models is a whole other endeavor.
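Since the exported point clouds get huge, it can help to decimate them before loading them into MeshLab or Blender. Below is a minimal sketch, in pure Python, of voxel-grid downsampling applied to a simple ASCII PLY: one representative point is kept per 10 mm cell. The tiny inline PLY and the voxel size are illustrative assumptions; a real Skanect export is far larger (and may be binary PLY), so in practice you would use a dedicated tool or library for this.

```python
import math

def parse_ascii_ply(text):
    """Return a list of (x, y, z) vertices from a simple ASCII PLY string."""
    lines = text.strip().splitlines()
    n_verts = 0
    for i, line in enumerate(lines):
        if line.startswith("element vertex"):
            n_verts = int(line.split()[-1])
        if line == "end_header":
            body = lines[i + 1 : i + 1 + n_verts]
            return [tuple(float(v) for v in l.split()[:3]) for l in body]
    raise ValueError("not a valid ASCII PLY")

def voxel_downsample(points, voxel=0.01):
    """Keep one representative point per voxel (10 mm cells by default)."""
    cells = {}
    for x, y, z in points:
        key = (math.floor(x / voxel), math.floor(y / voxel), math.floor(z / voxel))
        cells.setdefault(key, (x, y, z))  # first point wins per cell
    return list(cells.values())

# Illustrative 4-point cloud: two pairs of near-duplicate points.
ply = """ply
format ascii 1.0
element vertex 4
property float x
property float y
property float z
end_header
0.001 0.001 0.001
0.002 0.002 0.002
0.503 0.503 0.503
0.506 0.506 0.506
"""
pts = parse_ascii_ply(ply)
small = voxel_downsample(pts, voxel=0.01)
print(len(pts), "->", len(small))  # prints: 4 -> 2
```

The same idea, at scale, is what brings a 420 MB scan down to something a laptop can comfortably edit; MeshLab offers comparable filters through its GUI.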
Skanect with the view on my office (about 860,000 points)
PLY file loaded into MeshLab (some clipping occurs)
Thank you. What is the scanning range? Can we use it at a distance of at least 5 to 6 meters? And if the area to scan is very large, let's say a complete building, can it process that much data?
It's hard to get any accuracy beyond about 4 meters... as seen in the screenshots. You have to walk around quite a bit and then the pointcloud becomes quite heavy to process in one go.
For complete buildings, this isn't the most practical setup. A full laser-scanning solution is better suited, but also requires several registration positions that have to be referenced and synchronized afterwards in the software.
Maybe the photography-based "123D Catch" app from Autodesk can be used instead?
Hello Stefan,
Is it also possible to use other scanning devices with Skanect and ReconstructMe? E.g. the Asus Xtion Live Pro? (This is what I am using at the moment with the FARO SCENECT.) It has the big advantage that it is powered via USB.
I would like to compare it with other software like skanect.
thanks,
thomas
I have no access to other scanning devices, so I don't know the answer. But I'm reasonably sure that the "Kinect hacks" that are used by most frameworks are quite specific to this device. Perhaps if there were a generic scanner API or wrapper, it could then be applied inside applications.
Well, this kind of software is very handy for every architect out there, since they can easily visualize their blueprint in a more comprehensible way. This will help them see if there is something wrong or lacking with their design plan, and will surely help them improve their drafts.
I found another one: http://www.matherix.com/homepage.html
Thanks for the info here.
Thank you. Matherix 3Dify is currently in private beta, but you can apply for the next round of testing. It is only for Windows, so it seems.
I would like to scan an entire car. What would be a good option?
I assume that the Kinect would be rather imprecise. The total size of a car is not too large for a Kinect, though. If you plan to reconstruct the car body with actual surfaces, you could use the rough point-cloud as an underlay. However, the rough point-cloud would not be usable to generate clean and smooth surfaces. In fact, in architecture this is a little less problematic, as we can approximate most surfaces with flat faces anyway.
Is there a program I can use to scan a hard copy of my floor plan so that I can alter it on my PC?
While there are automated tools to convert a (scanned) drawing into e.g. DWG or vector PDF, you need a good scan to begin with (preferably straightened and undistorted). I often use scanned images as an underlay inside my BIM software to reconstruct the model. Since most of these scans originate from CAD drawings anyway, they are fairly accurate and the dimensions can be used during reconstruction.
When tracing old hand-drawn plans and elevations, even underlays that are properly rotated and scaled often display inconsistencies that require a lot of interpretation. That is not something you can automate.
I would not use a Kinect at all for that. I found that quick smartphone photos often allow a very fast capture and import into the BIM software, although, due to the short lenses, with quite some distortion. I seldom use a scanner these days.
You can search for "scan to dwg" programs if you need to trace images to vectors; they give more usable results than the generic tracing tools inside Illustrator or CorelDRAW. I have not used any of them myself, though.
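The rotating-and-scaling step mentioned above can be reduced to a small calculation: pick two points on the scanned plan that lie on a wall of known real length, and derive the scale and rotation needed to align the underlay. A minimal sketch (the pixel coordinates and the 4000 mm wall length are made-up example values, and any BIM package will have its own way to apply the resulting transform):

```python
import math

def underlay_transform(p1, p2, real_length_mm):
    """Given pixel coordinates of two points on a scanned plan that lie on a
    wall of known real length, return (scale in mm per pixel, current angle of
    that wall in degrees). Rotating the image by the negative of that angle
    levels the wall; multiplying by the scale brings it to true size."""
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    px_len = math.hypot(dx, dy)          # wall length in pixels
    scale = real_length_mm / px_len      # mm represented by one pixel
    angle = math.degrees(math.atan2(dy, dx))
    return scale, angle

# A 400 px horizontal wall that is 4000 mm long in reality:
scale, angle = underlay_transform((100, 100), (500, 100), 4000.0)
print(scale, angle)  # prints: 10.0 0.0
```

With the scale and angle known, the scanned plan can be placed as a correctly sized underlay; the remaining inconsistencies of hand-drawn originals still need manual interpretation, as noted above.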