Overview
ARKit is essentially a bridge between the real world and virtual objects: it lets virtual content be placed in, and interact with, your real environment. This demo application shows how to place an object and how to interact with your virtual objects using gestures and hit testing.
Prerequisites
- Xcode 9.3
- iOS 11.3
- A device with an A9 (or later) processor
Project Setup
Open Xcode and create a new project. Choose “Augmented Reality App” and fill in the required details.
Apple provides options for the Content Technology: SceneKit, SpriteKit, and Metal. Here we will choose SceneKit. If you want to place a 3D model, Xcode needs to read that model in a SceneKit-supported format (.scn) or as a .dae file.
ARKit is a session-based framework. The session contains a scene that renders virtual objects in the real world, and for that ARKit needs to use the iOS device’s camera. So you have to add the camera usage description to your Info.plist file:
Privacy – Camera Usage Description (the NSCameraUsageDescription key).
Now we need to set up a couple of IBOutlets, as below: an ARSCNView, and a UILabel (infoLabel) for informing the user about the AR session state and any updates to the node.
@IBOutlet var sceneView: ARSCNView!
@IBOutlet var infoLabel: UILabel!
For debugging purposes you can set sceneView.debugOptions = [ARSCNDebugOptions.showFeaturePoints] and see how ARKit detects surfaces. When you run the app, you should see a lot of yellow dots in the scene. These are feature points, and they help ARKit estimate properties such as the orientation and position of physical objects in the current environment. The more feature points in an area, the better the chance that ARKit can determine and track the environment.
override func viewDidLoad() {
    ...
    // Show statistics such as fps and timing information
    sceneView.showsStatistics = true
    sceneView.debugOptions = [ARSCNDebugOptions.showFeaturePoints]
    let scene = SCNScene()
    sceneView.scene = scene
    ...
}
Now it’s time to set up a world-tracking session with horizontal plane detection. As you can see, in your viewWillAppear method a session has already been created and set to run; add plane detection to its configuration.
configuration.planeDetection = .horizontal
So now your method will look like this.
override func viewWillAppear(_ animated: Bool) {
    ...
    // Create a session configuration
    let configuration = ARWorldTrackingConfiguration()
    configuration.planeDetection = .horizontal
    sceneView.session.run(configuration)
    ...
}
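For completeness, the Xcode “Augmented Reality App” template also pauses the session when the view goes off screen. If your project no longer has it, a minimal version looks like this:

override func viewWillDisappear(_ animated: Bool) {
    super.viewWillDisappear(animated)
    // Pause the session so the camera and tracking stop while the view is not visible.
    sceneView.session.pause()
}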
Detect plane and place object
When ARKit detects a surface, it provides an ARPlaneAnchor object. An ARPlaneAnchor basically contains information about the position and orientation of a detected real-world surface.
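To give a feel for what an ARPlaneAnchor carries, here is a minimal illustrative sketch (this helper is not part of the demo app) that reads its center and extent:

func describe(_ planeAnchor: ARPlaneAnchor) -> String {
    // center is the plane's position relative to its anchor; extent is the estimated
    // width (x) and depth (z) of the detected surface, in meters.
    let center = planeAnchor.center
    let extent = planeAnchor.extent
    return String(format: "Plane at (%.2f, %.2f, %.2f), about %.2f m x %.2f m",
                  center.x, center.y, center.z, extent.x, extent.z)
}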
To know when a surface is detected, updated, or removed, use the ARSCNViewDelegate methods, which feel like magic in ARKit. Implement the following ARSCNViewDelegate methods so you will be notified when the scene view updates.
override func viewDidLoad() {
    ...
    sceneView.delegate = self
    ...
}

// MARK: - ARSCNView delegate

func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    // Called when a node has been added for an anchor
}

func renderer(_ renderer: SCNSceneRenderer, didRemove node: SCNNode, for anchor: ARAnchor) {
    // Called when a node has been removed from the scene view
}

func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
    // Called when a node has been updated with data from its anchor
}

func session(_ session: ARSession, cameraDidChangeTrackingState camera: ARCamera) {
    // Helps us inform the user when the app is ready
}
The ARSessionObserver protocol provides the current tracking state of the camera, so you can tell whether your app is ready to detect planes. Once you get the .normal state, you are ready to detect a plane. For that, implement these delegate methods.
// MARK: - ARSessionObserver

func sessionWasInterrupted(_ session: ARSession) {
    infoLabel.text = "Session was interrupted"
}

func sessionInterruptionEnded(_ session: ARSession) {
    infoLabel.text = "Session interruption ended"
    resetTracking()
}

func session(_ session: ARSession, didFailWithError error: Error) {
    infoLabel.text = "Session failed: \(error.localizedDescription)"
    resetTracking()
}

func resetTracking() {
    let configuration = ARWorldTrackingConfiguration()
    configuration.planeDetection = .horizontal
    sceneView.session.run(configuration, options: [.resetTracking, .removeExistingAnchors])
}

func session(_ session: ARSession, cameraDidChangeTrackingState camera: ARCamera) {
    // Helps us inform the user when the app is ready
    switch camera.trackingState {
    case .normal:
        infoLabel.text = "Move the device to detect horizontal surfaces."
    case .notAvailable:
        infoLabel.text = "Tracking not available."
    case .limited(.excessiveMotion):
        infoLabel.text = "Tracking limited - Move the device more slowly."
    case .limited(.insufficientFeatures):
        infoLabel.text = "Tracking limited - Point the device at an area with visible surface detail."
    case .limited(.initializing):
        infoLabel.text = "Initializing AR session."
    default:
        infoLabel.text = ""
    }
}
When a plane has been detected, add the object onto it. Here we are going to add a 3D model named “Shoes_V4.dae”.
class ViewController: UIViewController, ARSCNViewDelegate, ARSessionDelegate {
    ...
    var object: SCNNode!
    ...

    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
        // Called when a node has been added for an anchor
        ...
        guard let planeAnchor = anchor as? ARPlaneAnchor else { return }
        DispatchQueue.main.async {
            self.infoLabel.text = "Surface Detected."
        }
        let shoesScene = SCNScene(named: "Shoes_V4.dae", inDirectory: "Model.scnassets")
        object = shoesScene?.rootNode.childNode(withName: "group1", recursively: true)
        // Position the model at the center of the detected plane (in the anchor's coordinate space)
        object.simdPosition = float3(planeAnchor.center.x, planeAnchor.center.y, planeAnchor.center.z)
        node.addChildNode(object)
        ...
    }
}
You can get the child node’s name from the model’s scene graph (for example, in Xcode’s SceneKit scene editor). Here it is “group1”.
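If you are not sure which child node name to use, one quick illustrative way is to print the names of all child nodes of the loaded model and pick the one you need:

if let shoesScene = SCNScene(named: "Shoes_V4.dae", inDirectory: "Model.scnassets") {
    shoesScene.rootNode.enumerateChildNodes { child, _ in
        print(child.name ?? "<unnamed node>")   // e.g. "group1"
    }
}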
Now build and run your app. You will notice that some surfaces show more feature points, while in other areas the results are not as good. Surfaces that are shiny or a single flat color make it difficult for ARKit to obtain strong reference points for plane detection and to determine unique points in the environment. If you are unable to see many feature points, move your device around the area and try detecting different objects or surfaces. Once ARKit is ready with the detected plane, your object will be added onto it.
Change position of object to tap location with UITapGestureRecognizer
To place the object at a tap location, first add a UITapGestureRecognizer to the scene view.
override func viewDidLoad() {
    ...
    let tapGesture = UITapGestureRecognizer(target: self, action: #selector(didTap(_:)))
    sceneView.addGestureRecognizer(tapGesture)
    ...
}
Then, in the tap gesture handler, move the node to the tap position. A node represents the position and coordinates of an object in 3D space. Here we set the position of the node to the tap position, using a hit test against feature points.
@objc func didTap(_ gesture: UITapGestureRecognizer) {
    guard let _ = object else { return }
    let tapLocation = gesture.location(in: sceneView)
    let results = sceneView.hitTest(tapLocation, types: .featurePoint)
    if let result = results.first {
        let translation = result.worldTransform.translation
        object.position = SCNVector3Make(translation.x, translation.y, translation.z)
        sceneView.scene.rootNode.addChildNode(object)
    }
}
To get the translation component of the worldTransform, add this extension.
extension float4x4 {
    var translation: float3 {
        let translation = self.columns.3
        return float3(translation.x, translation.y, translation.z)
    }
}
Scaling object with UIPinchGestureRecognizer
To zoom the 3D object in and out, we have to change the object’s scale while the user pinches. To recognize a pinch on the scene view, add a UIPinchGestureRecognizer.
override func viewDidLoad() {
    ...
    let pinchGesture = UIPinchGestureRecognizer(target: self, action: #selector(didPinch(_:)))
    sceneView.addGestureRecognizer(pinchGesture)
    ...
}
Here we set a maximum scale of 2 (200% of the original object) and a minimum scale of 0.5 (50% of the original object). You can change these limits according to your needs.
@objc func didPinch(_ gesture: UIPinchGestureRecognizer) {
    guard let _ = object else { return }
    var originalScale = object?.scale

    switch gesture.state {
    case .began:
        originalScale = object?.scale
        gesture.scale = CGFloat((object?.scale.x)!)
    case .changed:
        guard var newScale = originalScale else { return }
        if gesture.scale < 0.5 {
            newScale = SCNVector3(x: 0.5, y: 0.5, z: 0.5)
        } else if gesture.scale > 2 {
            newScale = SCNVector3(2, 2, 2)
        } else {
            newScale = SCNVector3(gesture.scale, gesture.scale, gesture.scale)
        }
        object?.scale = newScale
    case .ended:
        guard var newScale = originalScale else { return }
        if gesture.scale < 0.5 {
            newScale = SCNVector3(x: 0.5, y: 0.5, z: 0.5)
        } else if gesture.scale > 2 {
            newScale = SCNVector3(2, 2, 2)
        } else {
            newScale = SCNVector3(gesture.scale, gesture.scale, gesture.scale)
        }
        object?.scale = newScale
        gesture.scale = CGFloat((object?.scale.x)!)
    default:
        gesture.scale = 1.0
        originalScale = nil
    }
}
Rotate object using UIPanGestureRecognizer
To rotate the object using a pan gesture, add a UIPanGestureRecognizer to the scene view.
override func viewDidLoad() {
    ...
    let panGesture = UIPanGestureRecognizer(target: self, action: #selector(didPan(_:)))
    sceneView.addGestureRecognizer(panGesture)
    ...
}

class ViewController: UIViewController, ARSCNViewDelegate, ARSessionDelegate {
    ...
    var currentAngleY: Float = 0.0
    ...

    @objc func didPan(_ gesture: UIPanGestureRecognizer) {
        guard let _ = object else { return }
        let translation = gesture.translation(in: gesture.view)
        // Convert the horizontal pan distance (in points) to an angle in radians
        var newAngleY = Float(translation.x) * Float(Double.pi) / 180.0
        newAngleY += currentAngleY
        object?.eulerAngles.y = newAngleY
        if gesture.state == .ended {
            currentAngleY = newAngleY
        }
    }
}
You can also rotate the object using a UIRotationGestureRecognizer, but that recognizes a rotation made with two fingers. Here we used only one finger to rotate the object in the scene view.
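If you prefer the two-finger gesture, here is a minimal sketch of such a handler (didRotate is a hypothetical name; add the recognizer the same way as the others):

@objc func didRotate(_ gesture: UIRotationGestureRecognizer) {
    guard let object = object else { return }
    // Apply the reported rotation (in radians) around the vertical axis,
    // then reset it so the next callback delivers only the new delta.
    object.eulerAngles.y -= Float(gesture.rotation)
    gesture.rotation = 0
}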
Thanks for reading.
If you have enjoyed and learned something valuable from this tutorial, please let me know by sharing it with your friends.