
3D scanning using Kinect and free software

I assume that everybody will at least have heard about the Kinect from Microsoft. It is mainly a camera that sees depth and color. In addition, it comes with a whole image interpretation library that is used to detect gestures and poses from people standing in front of it. However, a use that seems more directly applicable to architects and designers is 3D scanning.

There are more accurate methods: terrestrial laser scanning captures millions of points quickly and accurately, and photogrammetry works from plain images but requires more extensive 3D reconstruction algorithms. Still, the Kinect is very attractive, as it is cheap (around $100-150 standalone) and works on "any" computer, partly thanks to the Open Source drivers that have become available, but more recently also through the Microsoft SDK, although that only works on Windows.
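
For those who do want to tinker, the Open Source drivers make the raw data quite accessible even without the full SDK. As a minimal sketch (assuming the libfreenect Python wrapper, the "freenect" module, and numpy are installed; the exact API may differ between versions), grabbing a single depth and color frame looks roughly like this:

    # Minimal sketch: grab one depth and one RGB frame through the Open
    # Source libfreenect driver's Python wrapper. Assumes the "freenect"
    # module and numpy are installed; the API may vary between versions.
    import freenect
    import numpy as np

    depth, _ = freenect.sync_get_depth()   # raw 11-bit depth values per pixel
    rgb, _ = freenect.sync_get_video()     # 640x480 RGB image

    print("depth image:", np.asarray(depth).shape)
    print("color image:", np.asarray(rgb).shape)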

That said, the majority of architects and designers are not really into coding. Luckily, there are some free and cheap ready-made solutions.

ReconstructMe

For Windows, you can try ReconstructMe, an all-in-one program that has a non-commercial version for, well, non-commercial work (e.g. students).

Skanect

As an alternative, there is also Skanect, which is available for Windows and OSX. It is not Open Source, but uses several Open Source libraries and is free to use (for now?).

I connected the Kinect to a USB port, ran the software, and everything worked out of the box (at least on OSX).

The interface is rather rudimentary: you see the depth image, the RGB camera image and a point-cloud 3D view. You set a main accuracy resolution (e.g. 10 mm) and press start. The Kinect starts capturing points (with added color) and, as long as you move it around slowly and keep enough overlap with what was already captured, the measured points are added to the whole scene.
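
Conceptually, each depth frame gets turned into a set of 3D points before it is fused into the growing scene. Skanect does not expose its internals, so purely as an illustration, here is a sketch of that conversion using a simple pinhole camera model with rough, assumed Kinect intrinsics (not calibrated values) and a depth image assumed to be in millimeters:

    # Illustration only: convert a Kinect depth image into 3D points with a
    # pinhole camera model. FX/FY/CX/CY are rough, assumed intrinsics for the
    # 640x480 depth camera; depth_mm is assumed to be in millimeters.
    import numpy as np

    FX = FY = 580.0          # assumed focal length in pixels
    CX, CY = 320.0, 240.0    # assumed principal point

    def depth_to_points(depth_mm):
        h, w = depth_mm.shape
        u, v = np.meshgrid(np.arange(w), np.arange(h))
        z = depth_mm / 1000.0                  # millimeters -> meters
        x = (u - CX) * z / FX
        y = (v - CY) * z / FY
        points = np.dstack((x, y, z)).reshape(-1, 3)
        return points[points[:, 2] > 0]        # drop pixels without a reading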

Skanect with the view of my office (about 860,000 points)
When you are done, you can stop and export the result to a PLY file, which you can open in e.g. the Open Source MeshLab or Blender applications to edit further and re-export. The process is fairly simple for the user, but beware that the files get huge: the scan of the example shown here is about 420 MB. You also have to take the limited depth range into account, so you might have to wander around and carry the laptop with you to get close enough.
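
If the exported PLY turns out to be too heavy to edit comfortably, thinning it out before loading it into MeshLab or Blender can help. One way to do this, sketched here with the Open3D library (my own assumption, not something Skanect provides; the file name and the 1 cm voxel size are placeholders):

    # Minimal sketch: voxel-downsample a heavy Skanect PLY export so it is
    # easier to handle in MeshLab or Blender. Assumes the open3d package is
    # installed; file names and voxel size are placeholders.
    import open3d as o3d

    cloud = o3d.io.read_point_cloud("office_scan.ply")
    print(cloud)                                        # reports point count
    small = cloud.voxel_down_sample(voxel_size=0.01)    # roughly 1 point per cm
    o3d.io.write_point_cloud("office_scan_small.ply", small)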

PLY file loaded into MeshLab (some clipping occurs)
When you have some reference measurements, you can use this scan as a quite detailed and accurate reference model, although translating it into optimized polygonal meshes or even BIM models is a whole other endeavor.

Comments

  1. Thank you. What is the scanning range? Can we use it for at least 5 to 6 meters distance? If the scanned area is very large, let's say a complete building, can it process that much data?

  2. It's hard to get any accuracy beyond about 4 meters, as seen in the screenshots. You have to walk around quite a bit, and the point cloud then becomes quite heavy to process in one go.

    For complete buildings, this isn't the most practical setup. A full laser-scanning solution is more optimized, but also requires several registration positions that have to be referenced and synchronized afterwards in the software.

    Maybe the photography-based "123D Catch" app from Autodesk can be used instead?

  3. Hello Stefan,

    Is it also possible to use other scanning devices with Skanect and ReconstructMe? For example the Asus Xtion Live Pro? (This is what I am using at the moment with the Faro SCENECT.) It has the big advantage that it is powered via USB.
    I would like to compare it with other software like Skanect.

    Thanks,

    Thomas

  4. I have no access to other scanning devices, so I don't know the answer. But I'm reasonably sure that the "Kinect hacks" used by most frameworks are quite specific to this device. It might work if there were a generic scanner API or wrapper that applications could build upon.

  5. Well, this software is very handy for every architect out there, since they can easily visualize their blueprint in a more comprehensible way. This will help them see if something is wrong or lacking in their design plan, and will surely help them improve their drafts.

  6. I found another one: http://www.matherix.com/homepage.html
    Thanks for the info here.

  7. Thank you. Matherix 3Dify is currently in private beta, but you can apply for the next round of testing. It is only for Windows, so it seems.

  8. I would like to scan an entire car. What would be a good option?

  9. I assume that the Kinect would be rather imprecise. The total size of a car is not too large for a Kinect, though. If you plan on reconstructing the car body with actual surfaces, you could use the rough point cloud as an underlay, but it would not be usable to generate clean and smooth surfaces directly. In architecture this is a little less problematic, as we can approximate most surfaces with flat faces anyway.

  10. Is there a program I can use to scan a hard copy of my floor plan so that I can alter it on my PC?

  11. While there are automated tools to convert a (scanned) drawing into e.g. DWG or vector PDF, you need a good scan to begin with (preferably straightened and undistorted). I often use scanned images as underlays inside my BIM software to reconstruct the model. Since most of these scans originate from CAD drawings anyway, they are fairly accurate and the dimensions can be used during reconstruction.

    When tracing old hand-drawn plans and elevations, even properly rotated and scaled underlays often display inconsistencies that require a lot of interpretation. That is not something you can automate.

    I would not use a Kinect at all for that. I have found that quick smartphone photos allow a very fast capture and import into the BIM software, although the short lenses introduce quite some distortion. I seldom use a scanner these days.

  12. You can search for "scan to DWG" programs if you need to trace images to vectors. They give more usable results than regular tracing tools, like those inside Illustrator or CorelDRAW, but I have not used any of them myself.


