How to Do Face Tracking with AR using Xcode

PS: With MORE screenshots and sample code!

Hello everyone! To continue the Image Tracking tutorial, today I will go over the steps to use face tracking with AR in Xcode. Let's jump into it!

Quick note: This is most suitable for those who are pretty used to working with Xcode but are rather new to AR

Let's get started with: Setting up the project

Similar to the other Xcode tutorial, we will go over the steps to set up a new project

Step 1: Launch Xcode and create a new project

Step 2: Choose the template and options for the new project, then click Next (an iOS Augmented Reality App template is a good fit here)

Step 3: Name your project and set up any additional information for it. After that, Xcode will create and open the new project

Adding assets to the Project

Step 1: Add a SceneKit asset catalog (a .scnassets folder) to the project: File > New > File…, then choose the SceneKit Catalog template

Step 2: Add a SceneKit scene (a .scn file) inside the .scnassets folder

You can create whatever you want in this 3D environment; for now, I will just create a Plane.
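
If you prefer to set up the node in code instead of the SceneKit editor, here is a minimal sketch of the same idea. The size, color, and position are placeholder values I chose for illustration, not something from the original scene:

    // A minimal sketch: create a plane node in code instead of in the .scn editor.
    // Size, color, and position below are placeholder values.
    import SceneKit
    import UIKit

    func makePlaneNode() -> SCNNode {
        let plane = SCNPlane(width: 0.2, height: 0.2)   // a 20 cm x 20 cm plane

        let material = SCNMaterial()
        material.diffuse.contents = UIColor.blue        // simple solid color
        material.isDoubleSided = true                   // visible from both sides
        plane.materials = [material]

        let node = SCNNode(geometry: plane)
        node.position = SCNVector3(0, 0, -0.5)          // half a meter in front of the scene origin
        return node
    }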

Edit ViewController.swift with the following lines of code

You can use these lines as a reference

First, import ARKit at the top of ViewController.swift

    import ARKit
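
The view controller also needs to adopt ARSCNViewDelegate so that the renderer callbacks further below compile and get called. Assuming you kept the template's ViewController class, the declaration looks like this:

    class ViewController: UIViewController, ARSCNViewDelegate {
        // outlets and methods from the rest of this tutorial go here
    }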

Then create a label in Main.storyboard and add the following outlets and variable to ViewController

    @IBOutlet var sceneView: ARSCNView!
    @IBOutlet var label: UILabel!
    var action = ""

The label I have here is called "Action"

Main functions:

First, in the viewDidLoad function, set up the sceneView and make sure the device supports face tracking

    override func viewDidLoad() {
        super.viewDidLoad()

        sceneView.delegate = self
        sceneView.showsStatistics = true

        guard ARFaceTrackingConfiguration.isSupported else {
            fatalError("Face tracking is not supported on this device")
        }
    }
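
Optionally, if you also want to display the scene you created in the .scnassets folder, you can load it at the end of viewDidLoad. This is only a sketch, and "Scene.scnassets/Scene.scn" is a placeholder path, so swap in your own catalog and file names:

    // Optional: load the scene built earlier in the asset catalog.
    // "Scene.scnassets/Scene.scn" is a placeholder path - use your own names.
    if let scene = SCNScene(named: "Scene.scnassets/Scene.scn") {
        sceneView.scene = scene
    }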

After that, in the viewWillAppear function, create a face tracking configuration and run the view's session

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)

        // Create a session configuration
        let configuration = ARFaceTrackingConfiguration()

        // Run the view's session
        sceneView.session.run(configuration)
    }
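
Optionally, instead of the plain run call above, you can reset tracking and clear out old anchors every time the view appears. This is a small refinement rather than something the tutorial requires:

    // Optional: reset tracking and remove existing anchors on every appearance
    sceneView.session.run(configuration,
                          options: [.resetTracking, .removeExistingAnchors])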

In the viewWillDisappear function, pause the session

    override func viewWillDisappear(_ animated: Bool) {
        super.viewWillDisappear(animated)

        // Pause the view's session
        sceneView.session.pause()
    }

In the renderer(_:nodeFor:) function, create the face mesh geometry

    func renderer(_ renderer: SCNSceneRenderer, nodeFor anchor: ARAnchor) -> SCNNode? {
        let faceMesh = ARSCNFaceGeometry(device: sceneView.device!)
        let node = SCNNode(geometry: faceMesh)
        node.geometry?.firstMaterial?.fillMode = .lines
        return node
    }
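
For reference, the .lines fill mode draws the face mesh as a wireframe. If you would rather render it as a solid, semi-transparent mask, a variation like the sketch below works too (the color and transparency are just values I picked):

    // Variation on the function above: a semi-transparent solid mask instead of a wireframe
    func renderer(_ renderer: SCNSceneRenderer, nodeFor anchor: ARAnchor) -> SCNNode? {
        guard anchor is ARFaceAnchor else { return nil }   // only build geometry for face anchors
        let faceMesh = ARSCNFaceGeometry(device: sceneView.device!)
        let node = SCNNode(geometry: faceMesh)
        node.geometry?.firstMaterial?.fillMode = .fill
        node.geometry?.firstMaterial?.diffuse.contents = UIColor.cyan
        node.geometry?.firstMaterial?.transparency = 0.5
        return node
    }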

In the other renderer function right below, renderer(_:didUpdate:for:), update the face mesh whenever the face anchor changes and output the detected expression to the label

    func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
        if let faceAnchor = anchor as? ARFaceAnchor, let faceGeometry = node.geometry as? ARSCNFaceGeometry {
            faceGeometry.update(from: faceAnchor.geometry)
            expression(anchor: faceAnchor)

            // UI updates must happen on the main thread
            DispatchQueue.main.async {
                self.label.text = self.action
            }
        }
    }

Add the expression function below, which checks a few blend shape values and sets the action text

    func expression(anchor: ARFaceAnchor) {
        let mouthSmileLeft = anchor.blendShapes[.mouthSmileLeft]
        let mouthSmileRight = anchor.blendShapes[.mouthSmileRight]
        let cheekPuff = anchor.blendShapes[.cheekPuff]
        let tongueOut = anchor.blendShapes[.tongueOut]
        let jawLeft = anchor.blendShapes[.jawLeft]
        let eyeSquintLeft = anchor.blendShapes[.eyeSquintLeft]


        self.action = "Waiting..."

        if ((mouthSmileLeft?.decimalValue ?? 0.0) + (mouthSmileRight?.decimalValue ?? 0.0)) > 0.9 {
            self.action = "You are smiling. "
        }

        if (cheekPuff?.decimalValue ?? 0.0) > 0.1 {
            self.action = "Your cheeks are puffed. "
        }

        if (tongueOut?.decimalValue ?? 0.0) > 0.1 {
            self.action = "Don't stick your tongue out! "
        }

        if (jawLeft?.decimalValue ?? 0.0) > 0.1 {
            self.action = "Your mouth is weird!"
        }

        if (eyeSquintLeft?.decimalValue ?? 0.0) > 0.1 {
            self.action = "Are you flirting?"
        }
    }
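
Each entry in blendShapes is an NSNumber between 0.0 (neutral) and 1.0 (fully expressed). If the repeated ?.decimalValue ?? 0.0 pattern gets noisy, a small helper like the one below (my own naming, not part of the original tutorial) keeps the checks readable:

    // Helper sketch: read a blend shape coefficient as a Float in 0.0...1.0
    func coefficient(_ key: ARFaceAnchor.BlendShapeLocation, from anchor: ARFaceAnchor) -> Float {
        return anchor.blendShapes[key]?.floatValue ?? 0.0
    }

    // Example usage inside expression(anchor:):
    // if coefficient(.cheekPuff, from: anchor) > 0.1 { self.action = "Your cheeks are puffed. " }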

Some explanation:

The app detects these facial expressions and shows the matching message in the "Action" label

Finally: Test the app on an iPhone device

Disclaimer: I am not completely sure about the exact requirements, but you will need an Apple developer account and a registered device to run the app on an iPhone. Keep in mind that ARKit does not run in the Simulator, and face tracking only works on devices that support ARFaceTrackingConfiguration (for example, iPhones with a TrueDepth front camera).

More: Image Tracking with AR using Xcode
