Holographic UI in VR with Unity


Introduction to programming UI in VR in Unity

Humans interact with computers in many ways, and the interface between humans and computers is crucial to facilitating this interaction. A good user interface should make using any kind of computer or console feel comfortable.

One of the most important research areas in the field of Human-Computer Interaction (HCI) is gesture recognition, as it provides a natural and intuitive way for people and machines to communicate. Instead of pressing switches, touching monitor screens, twisting knobs, or raising our voices, we can simply point a finger, wave a hand, or move our bodies, which leads to a better user experience and higher satisfaction.

Oculus hardware currently supports two tracking methods: the Touch controllers and hand tracking. In an effort to create immersive experiences and pleasant user interfaces within Virtual Reality, we will create a Natural User Interface that exploits Oculus’ hand tracking feature and the SDK it provides.

Following our article on how to set up an Oculus headset within Unity for VR development, in this article we will explore how to interact with a VR environment through a basic UI built from scratch.

For this project we will work within the Unity Editor along with the OVR library plugin for Unity.

Index

Introduction to programming UI in VR in Unity

Setting up our virtual self

Step 1 – Adding the OVR plugin prefabs

Step 2 – OVRCamera

Step 3 – OVRHands

Programming Input Recognition

Step 1 – Setting up references

Step 2 – Adding colliders

Creating a Holographic Menu Panel

Step 1 – Scene setup

Step 2 – Listening to the collision events (Input from closeup)

Step 3 – Scripting Panel

Step 4 – Listening to Raycast events (Input from further distance)

Creating in-app player motion

Setting up our virtual self

Step 1 – Adding the OVR plugin prefabs

To get started with VR development in Unity, we need to install the Oculus integration package, as mentioned in the previous article. This plugin consists of a set of components, scripts and other features that facilitate and enhance application development for Oculus headsets.

Having installed the package, we can search for the following prefabs in the Project section within our Editor:

  • OVRCameraRig
  • OVRCustomHandPrefab_L
  • OVRCustomHandPrefab_R

Step 2 – OVRCamera

The OVRCameraRig represents our head and set of eyes in the virtual world. Correspondingly, the hand prefabs represent our virtual hands.

Since our goal is to program gestures with our fingers, we will create a hierarchy that makes it easy to reference and control both hands from our scripts.

The “Hands” empty GameObject will act as the parent of both hand prefabs and hold the code that fires finger events (e.g. pinch, grasp, etc.).

Adding the hands as children of the OVRCameraRig will also help us later when implementing the locomotion feature: when teleporting, for example, we need to move our head and hands to the same spot, so a single parent GameObject makes this much easier. One possible arrangement is shown below.
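For example (exact names and placement are only a suggestion):

OVRCameraRig
    TrackingSpace            (created by the prefab)
    Hands                    (empty GameObject holding our hand-related scripts)
        OVRCustomHandPrefab_L
        OVRCustomHandPrefab_R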

Step 3 – OVRHands

After adding the hands, we will notice that Unity displays them in pink, indicating a missing material reference. We can fix this by creating a custom material and assigning it to the hands’ mesh.
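If you prefer to assign the material from code instead of dragging it onto the mesh in the Inspector, a minimal sketch could look like the following (the HandMaterialFixer name and handMaterial field are our own, and we assume the hand mesh lives on a child SkinnedMeshRenderer):

using UnityEngine;

public class HandMaterialFixer : MonoBehaviour
{
    // A material created in the project, e.g. a simple skin-colored Standard material.
    [SerializeField] private Material handMaterial = null;

    private void Start()
    {
        // The hand prefab renders its mesh through a SkinnedMeshRenderer on a child object.
        SkinnedMeshRenderer meshRenderer = GetComponentInChildren<SkinnedMeshRenderer>();
        if (meshRenderer != null)
            meshRenderer.sharedMaterial = handMaterial;
    }
}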

Programming Input Recognition

Diving into the code, we can start by creating some scripts that allow our application to understand whether the left or right hand is pinching and when, so that we can trigger UI interactions from this event.

Step 1 – Setting up references

The initial step is to serialize all the necessary references in our “Hands” script, which will initialize the colliders on our fingertips.

In addition, we will create a “VirtualHand.cs” script in which we define the action events fired for the “Pinch” gesture, as well as the hand and finger references and a boolean that helps us detect the pinch.


using System;
using UnityEngine;

public class VirtualHand : MonoBehaviour
{
    // Events fired when a pinch starts or ends, passing the hand that performed it.
    public Action<OVRHand> PinchStarted;
    public Action<OVRHand> PinchEnded;

    [SerializeField]
    private OVRHand hand = default;

    [SerializeField]
    private OVRHand.HandFinger finger = OVRHand.HandFinger.Index;

    private bool handWasPinching;

    // …
}

Step 2 – Adding colliders

Since we are creating the UI from scratch using 3D elements and disregarding Unity’s input system, we need to add colliders on the tips of our fingers, so that collisions can be triggered between our fingers and any 3D virtual elements. To achieve this, we created a script that references both hands and adds the colliders accordingly.

The method that does this is the following:

private void AddCollidersOnHand(OVRSkeleton skeleton)
    {
        foreach (OVRBone bone in skeleton.Bones)
        {
            if (bone.Id == OVRSkeleton.BoneId.Hand_IndexTip)
            {
                SphereCollider indexCollider = bone.Transform.gameObject.AddComponent<SphereCollider>();
                indexCollider.radius = 0.01f;
            }
        }
    }

Before calling this method, we need to wait for the application to initialize the whole bone structure; otherwise “skeleton.Bones” always has a count of 0 and the colliders are never added. Hence, we can call this function from a coroutine Start() as such:

IEnumerator Start()
    {
        while (handLeftSkeleton.Bones.Count == 0 || handRightSkeleton.Bones.Count == 0)
        {
            yield return null;
        }

        AddCollidersOnHand(handLeftSkeleton);
        AddCollidersOnHand(handRightSkeleton);
    }

To demonstrate, the whole script would look something like the following:

using System.Collections;
using UnityEngine;

public class Hands : MonoBehaviour
{
    [SerializeField]
    private OVRSkeleton handLeftSkeleton = null;
    [SerializeField]
    private OVRSkeleton handRightSkeleton = null;

    IEnumerator Start()
    {
        // Wait until the OVR plugin has populated the bone data for both hands.
        while (handLeftSkeleton.Bones.Count == 0 || handRightSkeleton.Bones.Count == 0)
        {
            yield return null;
        }

        AddCollidersOnHand(handLeftSkeleton);
        AddCollidersOnHand(handRightSkeleton);
    }

    private void AddCollidersOnHand(OVRSkeleton skeleton)
    {
        // Add a small sphere collider on the index fingertip so it can collide with our UI elements.
        foreach (OVRBone bone in skeleton.Bones)
        {
            if (bone.Id == OVRSkeleton.BoneId.Hand_IndexTip)
            {
                SphereCollider indexCollider = bone.Transform.gameObject.AddComponent<SphereCollider>();
                indexCollider.radius = 0.01f;
            }
        }
    }
}

Continuing with the VirtualHand script:

private void Update()
    {
        bool handCurrentlyPinching = IsPinching();

        if (handCurrentlyPinching != handWasPinching)
        {
            if (handCurrentlyPinching)
            {
                PinchStarted?.Invoke(hand);
                Debug.Log(hand + " hand pinching");
            }
            else
            {
                PinchEnded?.Invoke(hand);
                Debug.Log(hand + " hand stopped pinching");
            }
        }
        handWasPinching = handCurrentlyPinching;
    }

    public bool IsTrackingGood(OVRHand hand)
    {
        return hand.HandConfidence == OVRHand.TrackingConfidence.High;
    }

    public bool IsPinching()
    {
        return hand.GetFingerIsPinching(finger);
    }

The OVR plugin provides us with the method “GetFingerIsPinching()”, which we can use to fire the pinch started and ended events.

To elaborate further, we use a boolean to detect when the pinch is taking place and fire the appropriate event.

Hence, within Update() we can detect whether the index finger on either hand has started or finished pinching, as shown above.

Creating a Holographic Menu Panel

Step 1 – Scene setup

We may start simply, by creating only the panel and a button, so that we can see our gestures hitting the corresponding colliders. We are specifically using 3D elements instead of Unity’s built-in UI so that we can code the input events from scratch, using physics. If we chose to use Unity’s UI system, we would have to manually replace its input system with the one provided by the OVR plugin.

Initially, we need an empty GameObject that will parent all our UI elements and also hold the scripts defining the input and interaction handlers; let’s call this the “SettingsPanel” GameObject. Within this GameObject we can add the “Container”, designed to look like a panel.

To achieve this, we can add a “SpriteRenderer” component to this empty GameObject and assign a material for our panel.

In addition, a 3D TextMeshPro component can provide us with title labels.

Our hierarchy will make it easier to manipulate the position of the holographic UI in the 3D virtual world; one possible arrangement is sketched below.
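As an illustration, the panel hierarchy could look something like this (names are only suggestions):

SettingsPanel            (empty parent, holds the input and interaction scripts)
    Container            (SpriteRenderer with the panel material)
    Title                (3D TextMeshPro label)
    Button               (SpriteRenderer + TextMeshPro + BoxCollider, see the note below)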

Note

It is important to manually add colliders to the UI elements we design from scratch so that we can detect collisions. Hence, we can add a BoxCollider component to our Button GameObject and manually adjust its size to contain the button, so that it is triggered when the fingertip’s sphere collider enters the box collider.
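If you would rather not size the collider by hand, a small sketch like the following could add and fit one automatically from the sprite’s bounds (the AutoFitButtonCollider name is our own):

using UnityEngine;

public class AutoFitButtonCollider : MonoBehaviour
{
    private void Awake()
    {
        SpriteRenderer buttonSprite = GetComponent<SpriteRenderer>();
        if (buttonSprite == null || buttonSprite.sprite == null)
            return;

        // Size the BoxCollider from the sprite bounds, with a little depth so a fingertip can enter it.
        BoxCollider box = gameObject.AddComponent<BoxCollider>();
        box.size = new Vector3(buttonSprite.sprite.bounds.size.x, buttonSprite.sprite.bounds.size.y, 0.02f);
        box.isTrigger = true;
    }
}

Also keep in mind that Unity only sends trigger messages when at least one of the colliders involved has a Rigidbody attached, so if no collisions are detected, adding a kinematic Rigidbody to the fingertip or the button usually helps.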

Step 2 – Listening to the collision events (Input from closeup)

An additional step toward input recognition for our holographic panel is to listen to the collision events fired between the fingers’ colliders and the box collider of our button. We created a script that can be attached to any UI element we design in the future; it looks like the following:

using TMPro;
using UnityEngine;

public class HoloUIComponent : MonoBehaviour
{
    // References to the button's label and sprite (the exact field types are our assumption).
    [SerializeField] private TMP_Text buttonText = null;
    [SerializeField] private SpriteRenderer buttonSprite = null;

    public void HighLightComponent()
    {
        buttonText.color = Color.blue;
        buttonSprite.color = Color.blue;
    }

    public void SelectComponent()
    {
        buttonText.color = Color.green;
        buttonSprite.color = Color.green;
    }

    public void Reset()
    {
        buttonText.color = Color.white;
        buttonSprite.color = Color.white;
    }

    private void OnTriggerEnter(Collider other)
    {
        if (other.gameObject.layer == LayerMask.NameToLayer("Player"))
        {
            SelectComponent();
            Debug.Log("Collided with : " + other.name);
        }
    }
}

As of now we can simply serialize the “buttonText” and “buttonSprite”, since we only have one UI element to test.

Step 3 – Scripting Panel

We will quickly realize that it is quite cumbersome to interact with a static panel in a 3D virtual world where we cannot move yet. Thus, a script that adjusts our panel’s position according to our head’s position is ideal, both to facilitate debugging our interactions and to improve the overall user experience.

To do so we can create a PanelAdjustor script that will reference our Head position and move the UI accordingly.

using UnityEngine;

public class PanelAdjustor : MonoBehaviour
{
    [SerializeField] private Transform headTransform = null;   // e.g. the center eye anchor of the OVRCameraRig
    [SerializeField] private Transform panel = null;           // the SettingsPanel root
    [SerializeField] private float headHeight = 0f;            // vertical offset from the head (tune in the Inspector)
    [SerializeField] private float smoothness = 1f;            // positional smoothing factor (tune in the Inspector)
    [SerializeField] private float smoothnessRotation = 0.1f;  // rotational smoothing factor (tune in the Inspector)

    void Update()
    {
        float cameraHeight = headTransform.position.y + headHeight;

        // Rotate the panel around the vertical axis so it follows the head's yaw.
        Quaternion newRotation = Quaternion.Euler(0, headTransform.rotation.eulerAngles.y, 0);
        newRotation = Quaternion.Lerp(panel.transform.localRotation, newRotation, smoothnessRotation);

        // Smoothly move the panel toward a point offset from the head, at the computed height.
        Vector3 newPosition = Vector3.Lerp(panel.position, new Vector3(headTransform.position.x, cameraHeight, headTransform.position.z + 1f), Time.fixedDeltaTime * smoothness);

        panel.transform.SetPositionAndRotation(newPosition, newRotation);
    }
}

The end result:

Step 4 – Listening to Raycast events (Input from further distance)

We can further ease our interaction with the panel by adding the ability to trigger the UI from a greater distance, casting a ray from our pointing finger towards the panel.

To do so, we need to code a cursor that is projected onto the panel from our distant finger and highlights our UI element (the button) when the raycast hits that specific component. In addition, we will listen to the pinch event from our hands so that we can actually click the button from a distance by pinching our right hand.

To create this functionality, we need a script that will draw a ray from our finger to the panel, so that we can visually see our selection, as well as a script that will handle the raycast for the highlighting and selection of the button.

Our hierarchy includes a parent GameObject: TeleCursor

Additionally, we need a GameObject that will draw the line using a LineRenderer component:

In detail, the Laser script contains a function that draws the line programmatically:

private List<Vector3> pointList = new List<Vector3>();
private int resolution = 24;

public void DrawLine(Vector3 start, Vector3 end, Gradient color)
{
    pointList.Clear();

    // Sample evenly spaced points along a straight line between start and end.
    for (float r = 0; r <= 1; r += 1.0f / resolution)
    {
        Vector3 point = Vector3.Lerp(start, end, r);
        pointList.Add(point);
    }

    lineRenderer.positionCount = pointList.Count;
    lineRenderer.SetPositions(pointList.ToArray());
    lineRenderer.colorGradient = color;
}

Subsequently, the TeleCursor script can call this function when the Raycast is actually hitting the correct element.

Note

To properly handle raycast events on the UI, it is important to set up an additional layer that masks the raycast, distinguishing the UI from the rest of the 3D elements we might add to our scene later. Thus, we created a layer called “HolographicUI” and filter the raycast in our code as such:

void Update()
{
    direction = startPoint.forward;

    RaycastHit hit;
    if (Physics.Raycast(startPoint.position, direction, out hit))
    {
        hitPoint = hit;

        if (hit.distance > minDistance && hit.transform.gameObject.layer == LayerMask.NameToLayer("HolographicUI"))
        {
           
            ring.gameObject.SetActive(true);
            laser.gameObject.SetActive(true);


            ring.position = hit.point;
            ring.LookAt(hit.point + hit.normal);

            Gradient gradient = isPinching ? gradientPinch : gradientDefault;
            laser.DrawLine(startPoint.position, hit.point, gradient);


            holoUIComponent = hit.collider.GetComponent<HoloUIComponent>();
            if (holoUIComponent != null)
                holoUIComponent.HighLightComponent();
            if (hands.IsPinchingRight() && holoUIComponent != null)
                holoUIComponent.SelectComponent();
       
        }
        else
        {
            laser.gameObject.SetActive(false);
            ring.gameObject.SetActive(false);
            if (holoUIComponent != null)
                holoUIComponent.Reset();
            holoUIComponent = null;
        }
    }
    else
    {
        laser.gameObject.SetActive(false);
        ring.gameObject.SetActive(false);
    }
}
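As a side note on the design, the layer check can also be pushed into the raycast itself: Physics.Raycast accepts a layer mask, so the ray simply ignores everything that is not on the UI layer. A minimal sketch, assuming a serialized uiLayerMask field set to “HolographicUI” in the Inspector:

[SerializeField]
private LayerMask uiLayerMask;   // set to the "HolographicUI" layer in the Inspector

void Update()
{
    RaycastHit hit;
    // The mask makes the ray ignore everything that is not on the HolographicUI layer.
    if (Physics.Raycast(startPoint.position, startPoint.forward, out hit, Mathf.Infinity, uiLayerMask))
    {
        // hit is guaranteed to be on the UI layer here, so the layer comparison is no longer needed.
    }
}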

While listening to the pinch events, we can trigger “HighLightComponent” / “SelectComponent” accordingly.

void Start()
    {
      handRight.PinchEnded += OnPinchingRightHand;
    }

    // Update is called once per frame
    void Update()
    {
        direction = startPoint.forward;

        RaycastHit hit;
        if (Physics.Raycast(startPoint.position, startPoint.forward, out hit))
        {
            if (hit.transform.gameObject.layer == LayerMask.NameToLayer("HolographicUI"))
            {
                laser.gameObject.SetActive(true);
                ring.gameObject.SetActive(true);

                ring.position = hit.point;
                ring.LookAt(hit.point + hit.normal);

                Gradient gradient = isPinching ? gradientPinch : gradientDefault;
                laser.DrawLine(startPoint.position, hit.point, gradient);


                HoloUIComponent hitComponent = hit.collider.GetComponent<HoloUIComponent>();
                if (hitComponent != null)
                {
                    holoUIComponent = hitComponent;
                    holoUIComponent.HighLightComponent();
                    if (handRight.IsPinching())
                        holoUIComponent.SelectComponent();
                }
                else if (holoUIComponent != null)
                {
                    holoUIComponent.Reset();
                    holoUIComponent = null;
                }

            }
            else
            {
                laser.gameObject.SetActive(false);
                ring.gameObject.SetActive(false);
            }
        }
        else
        {
            laser.gameObject.SetActive(false);
            ring.gameObject.SetActive(false);
        }
    }

    private void OnPinchingRightHand(OVRHand hand)
    {
        if (holoUIComponent != null)
            holoUIComponent.Reset();

    }

The result of our scripts should look like:

Creating in-app player motion

The final step of our tutorial is to create a basic locomotion feature, so that we can move around within the virtual world.

We have designed a simple teleportation feature: while pinching our left hand we aim at a spot on the ground, and when we release the pinch we teleport to the spot we aimed for.

For this feature we have created a Locomotion script and GameObject that handle the functionality.

Using a duplicate of the Laser, we can add a function to Laser.cs that draws a Bezier curve instead of a straight line, which is easier on the eyes.

Here, you may find an example reference on how to draw Bezier Curves.
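A minimal sketch of such a DrawCurve function, using a quadratic Bezier whose middle control point is lifted above the midpoint of start and end (the lift amount is an assumption of ours), could look like this:

public void DrawCurve(Vector3 start, Vector3 end, Gradient color)
{
    pointList.Clear();

    // Lift the middle control point so the line arcs instead of going straight.
    Vector3 control = Vector3.Lerp(start, end, 0.5f) + Vector3.up * 0.5f;

    for (float r = 0; r <= 1; r += 1.0f / resolution)
    {
        // Quadratic Bezier: interpolate start-to-control and control-to-end, then blend the two.
        Vector3 a = Vector3.Lerp(start, control, r);
        Vector3 b = Vector3.Lerp(control, end, r);
        pointList.Add(Vector3.Lerp(a, b, r));
    }

    lineRenderer.positionCount = pointList.Count;
    lineRenderer.SetPositions(pointList.ToArray());
    lineRenderer.colorGradient = color;
}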

Having done this, we can code our Locomotion script to listen to the left-hand pinch starting and ending, and handle the drawing and teleporting accordingly.


 void Start()
    {
        handLeft.PinchEnded += OnLeftHandPinchingEnd;
    }
    // Update is called once per frame
    void Update()
    {
        if (handLeft.IsPinching())
        {
            laser.gameObject.SetActive(true);
            direction = startPoint.forward;

            RaycastHit hit;
            if (Physics.Raycast(startPoint.position, direction, out hit))
            {
                hitPoint = hit;
                ring.gameObject.SetActive(true);
                laser.gameObject.SetActive(true);


                ring.position = new Vector3(hit.point.x, hit.point.y + 0.05f, hit.point.z);
                ring.LookAt(hit.point + hit.normal);

                laser.DrawCurve(startPoint.position, hit.point, gradientTeleport);
            }
        }
        else
        {
            laser.gameObject.SetActive(false);
            ring.gameObject.SetActive(false);
        }
           

    }

    private void OnLeftHandPinchingEnd(OVRHand hand)
    {
        cameraRig.transform.position = hitPoint.point;
    }

Finally, we can Teleport!
