[MUSIC] The scanner that we're going to look at in this video is the Kinect camera. This is the Kinect for Xbox One; the Kinect for Xbox 360 is a very similar system. You may be more familiar with them as a video game accessory. You can dance in front of them, and they let you control characters in a video game in real time using just your body motions. So they're really great at tracking motion in space, which is a pretty hard problem, and they also make great 3D cameras. On the old one you can see the three different parts a little more easily, so I'll stack them up. This uses three major components. You have a regular camera for color information, which gives you a live video stream of what's in front of it. Right next to that, it has an infrared camera, a specialized sensor that is sensitive only to infrared light, which is invisible to our eyes. And then off to the side here, you have an infrared projector. So this is a structured light scanner, and what that means is that it projects a pattern onto the object to be scanned. The infrared camera then interprets how that pattern gets distorted. If you've ever stood in front of a video projector and had the projected image distort across your face, you might be able to imagine how this is able to calculate your shape by measuring the distortion of the projected pattern. It needs those two separate parts, the infrared projector and the infrared camera, because the computer interprets that reflected light. As soon as the sensor starts looking at an object, it can create a 3D scan of one side of it, so you get an instantaneous result of one side of the object. And since it was designed to work with a high-powered computer like a video game console, and to deliver real-time information while you're playing a video game, it's a very fast way to capture a full 360 degrees of an object. At the Fab Lab, we use this to make scans of people.
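The depth calculation behind that pattern-distortion idea can be sketched with simple triangulation: the projector and infrared camera sit a known distance apart, and a projected dot on a nearby surface appears shifted in the camera image. The function below is a minimal illustration of that principle; the focal length and baseline values are made-up placeholders, not actual Kinect calibration numbers.

```python
# Sketch: how a structured-light scanner turns pattern distortion into depth.
# The projector and IR camera are separated by a known "baseline". A projected
# dot on a close surface shows up shifted ("disparity") in the camera image;
# depth follows by triangulation. All numbers are illustrative only.

def depth_from_disparity(disparity_px, focal_length_px=580.0, baseline_m=0.075):
    """Depth in meters from the pixel shift of a projected dot."""
    if disparity_px <= 0:
        raise ValueError("dot not matched in the image; no depth estimate")
    return focal_length_px * baseline_m / disparity_px

# With these example values, a dot shifted 29 px sits about 1.5 m away;
# larger shifts mean closer surfaces.
print(round(depth_from_disparity(29.0), 2))
```

Doing this for every dot in the projected pattern, every frame, is what gives the sensor its instantaneous one-sided 3D scan.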
It's a very quick, very easy way to get 3D scans of heads or busts, and we can run workshops where we 3D print dozens of children's miniature heads and teach the kids how to do it themselves. So this is a really easy, fast way to learn. These sensors are especially good at people; they're essentially designed to track people in space. Their algorithms can look for your body structure, your shoulders, your head. So they're good at tracking your movement: whether you're on an automatic turntable that spins you around like a lazy Susan, or you just sit on a swivel chair and spin yourself around, the Kinect software is able to recognize where your body is and how you're moving in space, and it can reconstruct you that way. The way it really works is by taking a 3D scan of one side of you, waiting a split second while you spin a little further, taking another 3D scan of that side, and stitching those together as you move. It's built with a fixed lens with a fixed focus, so you have to be within a certain range, roughly a meter away; the new one gives you a longer range, so you can be several meters away. But beyond that, things will be out of focus and it won't be able to process them, because the infrared projector can only reach so far. So it has a somewhat limited working range, but for people-sized objects it's perfect. It's built for that, and it will do it very quickly. One other advantage of this system, compared to other structured light scanners that might use a projected visible-light pattern or projected lasers (which we'll look at later), is that it uses near-infrared structured light. I mentioned that infrared projector. The reason that gives you some special abilities is that materials reflect infrared light differently than they do visible light. So some dark objects, dark-colored metals, or even things that are reflective or transparent to visible light may show up quite differently under infrared light.
So infrared light just bounces off surfaces a little differently than visible light does; it's a different wavelength. We've talked about the technology of these sensors and the different parts of how they work, so now we're going to find a volunteer and show you how a full body scan is done. We're going to use the Kinect sensor to get a full body scan of Duncan, or we'll do our best. This is an automatic turntable that was built at the Fab Lab by some students, the Illinois Makers. It's 3D-printed parts, a bike inner tube, and some wood that we had lying around. Getting the whole person in focus and in the frame of the camera is a little bit of a challenge, but it's really cool to get a whole statue of your friend, so we're going to give it a shot. With the Kinect 2, Microsoft provides software to test a lot of the different features of the Kinect. The feature that we're going to be testing is called Kinect Fusion, which means it's fusing scans together instantaneously. So I'm going to run that software, and immediately you can see the capture it gets of Duncan. Without me touching anything, it's actually doing a pretty good job already. But we're cutting him off at the thighs, and there are a couple of things we can do to improve this scan. First of all, in the upper right, you can tell that it's capturing our cameraman and the wall behind him. So I'm going to set the depth threshold here. Right now it's 8 meters, and as I adjust this parameter, you can see this black curtain approaching the camera, until we only have the subject that we care about. The other thing is that the Kinect software has a feature called camera pose finder, which is useful if you are carrying the camera around and walking around your object. I prefer having the sensor on a tripod, and I'm going to deselect that option since I'm not moving the camera around. So I uncheck camera pose finder, and any time I want to try to scan again, I can hit reset reconstruction.
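The depth threshold adjustment described above amounts to clipping every reading farther than a chosen cutoff out of the depth frame, so the wall and the camera operator disappear and only the subject remains. Here is a minimal sketch of that idea; the frame values and the convention of using 0 for "no data" are illustrative assumptions, not the Kinect SDK's actual data format.

```python
# Sketch: the "depth threshold" from the Kinect Fusion demo. Each sensor
# frame is a grid of distances in meters; clipping readings beyond a cutoff
# removes the wall and the cameraman, keeping only the nearby subject.

def clip_depth(frame, max_depth_m):
    """Replace readings beyond max_depth_m with 0.0 (treated as 'no data')."""
    return [[d if 0 < d <= max_depth_m else 0.0 for d in row] for row in frame]

frame = [
    [1.2, 1.3, 7.9],   # 7.9 m: the wall behind the subject
    [1.2, 1.4, 3.5],   # 3.5 m: the camera operator
]
print(clip_depth(frame, max_depth_m=2.0))
# → [[1.2, 1.3, 0.0], [1.2, 1.4, 0.0]]
```

Sliding the threshold toward the camera is exactly the "black curtain" effect: more and more of the far readings get zeroed out.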
And this is a bit of a game of hitting reset, waiting for the scan to look the way you expect, and hitting save, right here, create mesh. But we're still getting his legs cut off, so we're going to try adjusting the tripod a little bit. I move this back and adjust it down so we can see his feet. We won't see the result until I hit reset. There you go. Now I have more of his legs, though not quite his feet yet. Something I've had good luck with is starting the scan from the side of the person. The software is adding one scan to the next, trying to average those scans out and merge them into one object, and as you spin around, if it loses track of exactly where you are, you'll end up with two noses when the front of your face comes around. So I like to start the scan when I have the side of them in view; I just have better luck with the software lining everything up. Reset. You can see a lot of detail happens right there: his hair was very detailed, but then as soon as he started spinning away, you lose a lot of that detail. What the software is doing is taking one scan after another and averaging that data out. That's kind of a drawback of this approach: it's going to smooth a lot of features out, just as a byproduct of doing that instantaneous reconstruction. But this looks decent, so I'm going to hit create mesh and save the STL. So that one wasn't perfect, but let's try another pose. This looks good. I'll hit reset when he comes around to the side. Reset. Create mesh. Let's see what we got. So it kind of captured just one side of you. But it's cool. Maybe not suitable for 3D printing as is. [LAUGH] >> Uh-huh. Still looks pretty cool, though. >> So we got a pretty cool 3D scan of Duncan. Thank you for volunteering. Any time we get a scan from this, it's not going to be ready to 3D print; there'll be some post-processing involved. This just captures a surface mesh.
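That smoothing drawback follows directly from how the fusion works: each new depth frame is averaged into the running reconstruction. The real system fuses frames into a 3D truncated signed-distance volume; the one-dimensional running mean below is only a toy illustration of why a sharp feature seen in one frame gets diluted by frames where it is noisier or partly occluded.

```python
# Sketch: why instantaneous fusion smooths fine detail. Averaging successive
# depth readings of the same surface points suppresses noise, but it also
# flattens real detail (like a strand of hair) that only one frame saw well.

def fuse(frames):
    """Per-point running average of successive depth readings (meters)."""
    n = len(frames)
    return [sum(col) / n for col in zip(*frames)]

# Three readings of the same row of surface points. Frame 1 caught a real
# 10 cm feature at index 2; the other frames missed it, so fusion dilutes it.
frames = [
    [1.50, 1.50, 1.40, 1.50],
    [1.50, 1.50, 1.50, 1.50],
    [1.50, 1.50, 1.50, 1.50],
]
print(fuse(frames))
```

The averaged value at index 2 lands between 1.40 and 1.50: the feature survives, but shallower than it really is, which is exactly the loss of hair detail seen in the demo.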
So we have to create a watertight volume before we can 3D print. But you can see how quick it was to scan one person and see the result. So it's great for classroom activities or any time you just want to power through a list of scans; this is a great device. Very similar to the Kinect, but with a lot less setup, is the Structure Sensor for the iPad, and it uses the same technology. It's got the infrared projector, color camera, and infrared camera, but it's built into this package that straps to an iPad, and the software that's available for it lets you not worry about a lot of the technical details that we dealt with when we did the full body scan. If we look at the app that came with it, the Structure app, it gives us the option of viewing infrared, where you can see the sunlight coming in the window; or just depth, which shows us how far away things are; or depth and color, which overlays the color information on top. So if Jeff leans forward, he gets redder. So that's cool. This just gives you the raw data of what the sensor can capture. But if you want to do a head scan, like we do so often with the Kinect, then we'll use an app that we picked out called itSeez3D, which does a very nice job of capturing people in particular. So the person in front of me is Jeff Ginger. Hi, Jeff. >> Hi, guys. >> He's my boss. >> I'm the director of the Fab Lab. You're going to be meeting me later in the class for the capstone project. And I figured I could use a new Facebook picture, so we're going to give this a go. >> We'll see if it's 3D compatible. All right. So all it takes is to hit new scan, and it asks if I want to do an object or a human, and he's a human, so we'll go with that. >> [LAUGH] Hopefully. >> This is a really great app in contrast to the Kinect, where you have to worry about a lot more parameters, like focal range and how much detail you want to capture. This just takes care of it for you.
It even gives you tips and says, you know what, you should move closer; optimal distance is one and a half to three and a half feet. So I'll move a little bit closer, he's going to put on a big smiley face, and I will hit start. >> It's a really cool effect: it instantaneously generates this model. I can go a little lower to the bottom of his chin, a little higher to the top of his head. It's really great and fills in that detail as you move around, and then we have a nice plaster Jeff, and I can hit done. >> Hard to hold a smile that long. >> I can tap Jeff's face and do a local preview. The local preview will take about a minute to process. That's a little different from the Kinect, which made it a lot of fun for a classroom because you get the result right away. There's a bit of sitting around waiting for the computer to figure things out with this, but the result is going to be really nice. It's worth the wait. And there's the photograph. You have some extra hand parts. >> Geez. >> But besides that, it did a nice job of grabbing you. Do we edit those out, or do we have to do that on a computer later? >> I don't think this app has those kinds of editing features built in. So what does this export do? It previews and outputs the high-quality model, and then you'll be able to share it. As for post-processing, it already fills things in for you: it gave you a bottom, so we don't even need to edit this besides separating that extra object. So this is going to be a really good process for people who don't feel like they're technically skilled enough to fiddle with parameters until they get the right result, or to edit the model after the fact. This program is built to give you 3D-printable results right after doing the scan. [MUSIC] [SOUND]