
Location Data Logger Design and Implementation, Part 6: The Export Class


This is part 6 of a series of blog posts on the design and implementation of the location-aware Windows Store app "Location Data Logger". Download the source code to Location Data Logger here.

The Export Class

The early development versions of Location Data Logger only logged data points to CSV files. This was fairly easy to implement and the code responsible for the file I/O was mixed in with the MainPage class. When the logger was started the log file was created directly inside of what would later become the logger_start() method, and log points were written out inside of update_position(). While this was not great modular design, at the time it was fast and simple, more than adequate for the task, and certainly appropriate for an early phase of development. Long term, however, I needed something more scalable, and more robust.

The goal for Location Data Logger was always to be able to write to multiple file formats. This is something that is supported by nearly every other GPS data logger app that you can find for mobile devices, as well as by dedicated consumer data loggers, and I did not want to release a less capable app for this project. And, on top of that, I didn't want Location Data Logger to be a merely academic exercise: as an instructional tool, I felt it important to have just enough complexity to require some thoughtful design.

At minimum, the code for handling the file operations would have to move out of the MainPage class and into its own module. The real question, though, was how to handle the multiple file formats.

One solution would have been to have one large exporter that was simply responsible for everything, but that felt unwieldy. Though it would allow for sufficient code reuse, the fact that each file format has its own dependency modules and its own quirks (I'm looking at you, KML) meant that it would be a monolithic class with everything but the kitchen sink. On top of that, I'd need to either pass flags to tell the module which file formats were actively being logged, or create individual methods for logging each independently as needed. While a valid approach, it flies in the face of object-oriented design principles.

The approach I chose was to make use of inheritance, creating a base Export class with child classes for managing each file format.

The base class

There are three basic logging operations in Location Data Logger. The operations common to all file formats are implemented in the base class, which is called Export:

  1. Start the logging. This entails opening the log file on the filesystem and getting a handle to the file.
  2. Stop the logging. Close the file.
  3. Write to the log file. Write out one or more data points.

Note that I said "common to all file formats". The implication here is that some file formats require special handling for one or more of these operations, but at minimum they all have some basic stuff that they need to do. Specifically, open the file, close the file, and write text to it. Any customization to these three operations is handled within the child classes.

Note that, while the Export class defines a Write() method for writing to the file, this is a low-level function. Trackpoints are logged by calling the LogPoint() method defined in the child classes, which in turn call Write() as needed.

The Export class also defines two class members:

protected StorageFile file;
protected enum ExportStatus { Inactive, Initializing, Ready };
protected ExportStatus status;

The file variable is of type StorageFile and is the handle that each module uses to write to the underlying, physical file. The Start() method in the base class is responsible for opening this handle and setting file.

The status variable provides some basic bookkeeping. It's an enumeration with three possible values:

  • Initializing. The logger is in the process of opening the file, which is an asynchronous operation.
  • Ready. The log file was successfully opened and the Export object can write to the file.
  • Inactive. The log file is not open for writing. This means the file has either not yet been opened, has been explicitly closed because logging was stopped, or a failure occurred during an I/O operation.

A child object can use this status variable to make intelligent decisions about what file operations should or should not be attempted. (Ideally, the base Export object would also have this logic as a precaution. This is something I should add in a future version.)
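To make the structure concrete, here is a minimal sketch of what the base class might look like. This is not the actual Location Data Logger source: the method bodies, the Task return types, and the use of FileIO are assumptions based on the description above.

// Minimal sketch of the base Export class (illustrative only).
using System;
using System.Threading.Tasks;
using Windows.Storage;

public class Export
{
    protected StorageFile file;
    protected enum ExportStatus { Inactive, Initializing, Ready };
    protected ExportStatus status = ExportStatus.Inactive;

    // Open the log file in the target folder and save the handle in 'file'.
    public async Task Start(StorageFolder folder, String basename, String extension)
    {
        status = ExportStatus.Initializing;
        try
        {
            file = await folder.CreateFileAsync(basename + extension,
                CreationCollisionOption.GenerateUniqueName);
            status = ExportStatus.Ready;
        }
        catch
        {
            status = ExportStatus.Inactive;
        }
    }

    // Low-level write of raw text to the open log file.
    protected async Task Write(String text)
    {
        await FileIO.AppendTextAsync(file, text);
    }

    // Stop logging. A StorageFile needs no explicit close; just mark the
    // exporter as inactive so no further writes are attempted.
    public virtual void Stop()
    {
        status = ExportStatus.Inactive;
    }
}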

The ExportCSV class

The ExportCSV class is the simplest of the export modules because the file format does not have a complicated schema. Data is organized in rows, with commas separating each field or column, and the first row contains the field names. Thus, the overload for the Start() method is very short, and is used to print the header row:

public async void Start(StorageFolder folder, String basename)
{
    pid = 1;

    await base.Start(folder, basename, extension);

    if (status != Export.ExportStatus.Ready) return;
    try
    {
        await this.Write("PID,Latitude,Longitude,Accuracy,Altitude,AltitudeAccuracy,Speed,Heading,Orientation,HighPrecision,Timestamp\r\n");
    }
    catch
    {
        status = Export.ExportStatus.Inactive;
    }
}

Note that I use exception handling to set the status property to Inactive in the event of a failure. This will prevent the logger from attempting to write to a file that is not open.

No overloading is needed for the Stop() method. The LogPoint() method merely prints a line to the CSV file every time a track point comes in.
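For illustration, a LogPoint() along these lines would do the job; the Trackpoint property names below are assumptions taken from the CSV header row, not necessarily the names used in the actual source.

// Hypothetical sketch of ExportCSV.LogPoint(); field names are assumed from
// the header row printed in Start().
public async void LogPoint(Trackpoint trkpt)
{
    if (status != Export.ExportStatus.Ready) return;

    String row = String.Format("{0},{1},{2},{3},{4},{5},{6},{7},{8},{9},{10}\r\n",
        pid++, trkpt.Latitude, trkpt.Longitude, trkpt.Accuracy, trkpt.Altitude,
        trkpt.AltitudeAccuracy, trkpt.Speed, trkpt.Heading, trkpt.Orientation,
        trkpt.HighPrecision, trkpt.Timestamp);

    try
    {
        await this.Write(row);
    }
    catch
    {
        // A failed write closes the exporter, just as in Start().
        status = Export.ExportStatus.Inactive;
    }
}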

The ExportGPX class

This module is more complicated than the CSV module because a GPX file is an XML data file with a set schema. I had two choices here: either create the XML by hand, or use one of the XML libraries. I opted for the former because the GPX format is not very complicated, particularly for recording a single track log. It also offers slightly better crash protection, since writing the XML out as track points come in means that the file will be mostly complete (missing just the closing XML tags) in the event the app quits unexpectedly. Using an XML builder would require writing the whole file out periodically and when the logger is stopped, which can cause data loss in the event of a crash.

Like the ExportCSV module, then, the Start() method overload is used to print the preamble for the file, which in this case is a large chunk of XML. The LogPoint() method is similarly used to print the XML for each track point as they come in. Unlike the CSV module, however, this one needs an override for Stop() so that the closing XML tags can be printed:

const String footer = "</trkseg>\r\n</trk>\r\n</gpx>\r\n";

public override async void Stop()
{
    if (status == Export.ExportStatus.Ready)
    {
        try
        {
            await this.Write(footer);
        }

        catch
        { }
    }
    base.Stop();
 }
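The LogPoint() method mentioned above simply appends one <trkpt> element per fix. Here is a hedged sketch; the Trackpoint property names and the exact elements written are assumptions rather than the app's actual code.

// Hypothetical sketch of ExportGPX.LogPoint(): write one <trkpt> element
// per position update, so the file stays nearly complete at all times.
public async void LogPoint(Trackpoint trkpt)
{
    if (status != Export.ExportStatus.Ready) return;

    String xml = String.Format(
        "<trkpt lat=\"{0}\" lon=\"{1}\">\r\n<ele>{2}</ele>\r\n<time>{3}</time>\r\n</trkpt>\r\n",
        trkpt.Latitude, trkpt.Longitude, trkpt.Altitude,
        trkpt.Timestamp.ToString("o"));   // ISO 8601 timestamp

    try
    {
        await this.Write(xml);
    }
    catch
    {
        status = Export.ExportStatus.Inactive;
    }
}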

The ExportKML class

This is the most complicated of the exporters because a KML file has an elaborate schema, and there is no practical way to build the XML as you go. For this reason, I opted to use the XML classes in the Windows* Runtime to build the data file, and only write it out when the logger is stopped. One implication of doing this is that there is no crash protection: if the app quits unexpectedly, the KML file will not be generated. It would be good to add support for periodic writes (perhaps once or twice a minute) in future versions.

The Start() method sets up the base XML document structure and defines the parent elements that must be referenced when new log points are added. The LogPoint() method creates the XML for each log point, and adds it to the appropriate parent element. The Stop() method finishes up some final XML structures and then writes the whole thing out.
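As a rough sketch of this approach (the element layout is simplified to a single LineString, namespace declarations are omitted, and the member names are assumed rather than taken from the actual source), the three methods might look like this using the Windows.Data.Xml.Dom classes:

// Simplified sketch of ExportKML built on the WinRT XML DOM classes.
using System;
using Windows.Data.Xml.Dom;
using Windows.Storage;

public class ExportKML : Export
{
    XmlDocument kml;
    XmlElement coordinates;   // parent node that LogPoint() appends to

    public async void Start(StorageFolder folder, String basename)
    {
        await base.Start(folder, basename, ".kml");

        // Build the skeleton: <kml><Placemark><LineString><coordinates/>
        kml = new XmlDocument();
        XmlElement root = kml.CreateElement("kml");
        kml.AppendChild(root);

        XmlElement placemark = kml.CreateElement("Placemark");
        root.AppendChild(placemark);

        XmlElement lineString = kml.CreateElement("LineString");
        placemark.AppendChild(lineString);

        coordinates = kml.CreateElement("coordinates");
        lineString.AppendChild(coordinates);
    }

    public void LogPoint(Trackpoint trkpt)
    {
        // KML coordinates are "longitude,latitude,altitude" tuples.
        coordinates.AppendChild(kml.CreateTextNode(
            String.Format("{0},{1},{2} ", trkpt.Longitude, trkpt.Latitude, trkpt.Altitude)));
    }

    public override async void Stop()
    {
        // Serialize the whole document in one shot when logging stops.
        if (status == Export.ExportStatus.Ready)
        {
            try { await FileIO.WriteTextAsync(file, kml.GetXml()); }
            catch { }
        }
        base.Stop();
    }
}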

Calling the Export modules

The export modules are members of the DataLogger object. Logging a data point is quite simple, and done in the log_position() method which is called from the geo_PositionChanged event handler.

if (logCSV) eCSV.LogPoint(trkpt);
if (logGPX) eGPX.LogPoint(trkpt);
if (logKML) eKML.LogPoint(trkpt);

Opening and closing the export modules is similarly easy, and handled within the Start() and Stop() methods in DataLogger:

public void Start()
{
    basename = DateTime.Now.ToString("yyyyMMdd_HHmmss");

    Resume();

    running = true;            

    if (logCSV) eCSV.Start(folder, basename);
    if (logGPX) eGPX.Start(folder, basename);
    if (logKML) eKML.Start(folder, basename);
}

public void Stop()
{
    running = false;
    if (logCSV) eCSV.Stop();
    if (logGPX) eGPX.Stop();
    if (logKML) eKML.Stop();

    Pause();
}

← Part 5: The Data Grid View


    Intel® Perceptual Technology Development Lessons by Massimo Bonanni


    Intel® Perceptual Technology is attracting enormous interest in every area of technological and software development. It is a fascinating subject that opens up a myriad of possible scenarios in which the only real limit is the developer's imagination. From industry to medicine and health care, from education to entertainment, developers are aware of the need to integrate this kind of technology into their projects, giving them intuitive interfaces that are completely natural and touchless.

    To this end, one of the best and most talented Italian Microsoft* MVPs (Most Valuable Professional), Massimo Bonanni (Senior .NET Developer), has taken it upon himself to prepare a few technical appetizers: engaging lessons on the subject of Perceptual Computing, to explore the topic in depth and start getting to know the framework up close by developing some simple solutions.

    Below is the list of technical articles written by Massimo for the readers of Intel® Developer Zone:

    On the HTML.it site you can also read:

    Happy reading!


    Ultrabook™ and Tablet Windows* 8 Sensors Development Guide


    Introduction


    This guide gives developers an overview of the Microsoft Windows 8.1 sensors application programming interfaces (APIs) for Windows 8.1 Desktop and Windows Store applications, with a specific focus on the sensor capabilities available in Windows 8.1 Desktop mode. It summarizes the APIs that enable developers to create interactive applications that use common sensors such as accelerometers, magnetometers, and gyroscopes with Windows 8.1.

    Programming Choices for Windows 8.1


    Developers have multiple API choices to program sensors on Windows 8.1. The touch-friendly app environment is called “Windows Store apps.” Windows Store apps can run software developed with the Windows Runtime (WinRT) interface. The WinRT sensor API represents a portion of the overall WinRT library. For more details, please refer to the MSDN Sensor API library.

    Traditional Win Forms or MFC-style apps are called “Desktop apps” because they run in the Desktop Window Manager environment. Desktop apps can use either the native Win32*/COM API, a .NET-style API, or a subset of select WinRT APIs.

    The following is a list of WinRT APIs that can be accessed by Desktop apps:

    • Windows.Devices.Sensors (Accelerometer, Gyrometer, Ambient Light Sensor, Orientation Sensor...)
    • Windows.Networking.Proximity.ProximityDevice (NFC)
    • Windows.Devices.Geolocation (GPS)
    • Windows.UI.Notifications.ToastNotification
    • Windows.Globalization
    • Windows.Security.Authentication.OnlineId (including LiveID integration)
    • Windows.Security.CryptographicBuffer (useful binary encoding/decoding functions)
    • Windows.ApplicationModel.DataTransfer.Clipboard (access and monitor Windows 8 Clipboard)

    In both cases, the APIs go through a Windows middleware component called the Windows Sensor Framework. The Windows Sensor Framework defines the Sensor Object Model. The different APIs “bind” to that object model in slightly different ways.

    Differences in the Desktop and Windows Store app development will be discussed later in this document. For brevity, we will consider only Desktop app development. For Windows Store app development, please refer to the API Reference for Windows Store apps.

    Sensors


    There are many kinds of sensors, but we are interested in the ones required for Windows 8.1, namely accelerometers, gyroscopes, ambient light sensors, compass, and GPS. Windows 8.1 represents the physical sensors with object-oriented abstractions. To manipulate the sensors, programmers use APIs to interact with the objects. The following table provides information on how the sensors can be accessed from both the Windows 8 Desktop apps as well as from Windows Store apps.

    Feature/Toolset | Windows 8.1 Desktop Mode Apps: C++ | Windows 8.1 Desktop Mode Apps: C#/VB | Windows 8.1 Desktop Mode Apps: JavaScript*/HTML5 | Windows Store Apps: C++, C#, VB & XAML / JavaScript/HTML5 | Unity* 4.2
    Orientation Sensors (accelerometer, inclinometer, gyrometer) | Yes | Yes | Yes | Yes | Yes
    Light Sensor | Yes | Yes | Yes | Yes | Yes
    NFC | Yes | Yes | Yes | Yes | Yes
    GPS | Yes | Yes | Yes | Yes | Yes

    Table 1. Features Matrix for Windows* 8.1 Developer Environments

    As Figure 1 below shows, there are more sensor objects than actual hardware. Windows defines some “logical sensor” objects by combining information from multiple physical sensors. This is called “Sensor Fusion.”

    Figure 1. Different sensors supported, starting on Windows* 8

    Sensor Fusion

    Physical sensor chips have some inherent natural limitations. For example:

    • Accelerometers measure linear acceleration, which is a measurement of the combined relative motion and the force of Earth’s gravity. If you want to know the computer’s tilt, you’ll have to do some mathematical calculations.
    • Magnetometers measure the strength of magnetic fields, which indicate the location of the Earth’s Magnetic North Pole.

    These measurements are subject to an inherent drift problem, which can be corrected by using raw data from the Gyro. Both measurements are also dependent upon (scaled by) the tilt of the computer from level with respect to the Earth’s surface. For example, to obtain the computer’s heading with respect to the Earth’s True North Pole (the Magnetic North Pole is in a different position and moves over time), corrections must be applied.

    Sensor Fusion (Figure 2) is defined by obtaining raw data from multiple physical sensors, especially the Accelerometer, Gyro, and Magnetometer, performing mathematical calculations to correct for natural sensor limitations, computing more human-usable data, and representing those as logical sensor abstractions. The application developer must implement the necessary transformations required to translate physical sensor data to the abstract sensor data. If the system design has a SensorHub, the fusion operations will take place inside the microcontroller firmware. If the system design does not have a SensorHub, the fusion operations must be done inside one or more device drivers that the IHVs and/or OEMs provide.

    Figure 2. Sensor fusion via combining output from multiple sensors

    Identifying Sensors

    To manipulate a sensor, the system needs a way to identify and refer to it. The Windows Sensor Framework defines a number of categories that sensors are grouped into. It also defines a large number of specific sensor types. Table 2 lists some of the sensors available to Desktop applications.

    • “All”
    • Biometric: Human Presence, Human Proximity*, Touch
    • Electrical: Capacitance, Current, Electrical Power, Inductance, Potentiometer, Resistance, Voltage
    • Environmental: Atmospheric Pressure, Humidity, Temperature, Wind Direction, Wind Speed
    • Light: Ambient Light
    • Location: Broadcast, Gps, Static
    • Mechanical: Boolean Switch, Boolean Switch Array, Force, Multivalue Switch, Pressure, Strain, Weight
    • Motion: Accelerometer 1D, Accelerometer 2D, Accelerometer 3D, Gyrometer 1D, Gyrometer 2D, Gyrometer 3D, Motion Detector, Speedometer
    • Orientation: Compass 1D, Compass 2D, Compass 3D, Device Orientation, Distance 1D, Distance 2D, Distance 3D, Inclinometer 1D, Inclinometer 2D, Inclinometer 3D
    • Scanner: Barcode, Rfid

    Table 2. Sensor types and categories

    The sensor types required by Windows 8 are the following:

    • Accelerometer, Gyro, Compass, and Ambient Light are the required “real/physical” sensors
    • Device Orientation and Inclinometer are the required “virtual/fusion” sensors (note that the Compass also includes fusion-enhanced/tilt-compensated data)
    • GPS is a required sensor if a WWAN radio exists, otherwise GPS is optional
    • Human Proximity is an oft-mentioned possible addition to the required list, but, for now, it’s not required.

    All of these constants correspond to Globally Unique IDs (GUIDs). Below, in Table 3, is a sample of some of the sensor categories and types, the names of the constants for Win32/COM and .NET, and their underlying GUID values.

    Identifier | Constant (Win32*/COM) | Constant (.NET) | GUID
    Category “All” | SENSOR_CATEGORY_ALL | SensorCategories.SensorCategoryAll | {C317C286-C468-4288-9975-D4C4587C442C}
    Category Biometric | SENSOR_CATEGORY_BIOMETRIC | SensorCategories.SensorCategoryBiometric | {CA19690F-A2C7-477D-A99E-99EC6E2B5648}
    Category Electrical | SENSOR_CATEGORY_ELECTRICAL | SensorCategories.SensorCategoryElectrical | {FB73FCD8-FC4A-483C-AC58-27B691C6BEFF}
    Category Environmental | SENSOR_CATEGORY_ENVIRONMENTAL | SensorCategories.SensorCategoryEnvironmental | {323439AA-7F66-492B-BA0C-73E9AA0A65D5}
    Category Light | SENSOR_CATEGORY_LIGHT | SensorCategories.SensorCategoryLight | {17A665C0-9063-4216-B202-5C7A255E18CE}
    Category Location | SENSOR_CATEGORY_LOCATION | SensorCategories.SensorCategoryLocation | {BFA794E4-F964-4FDB-90F6-51056BFE4B44}
    Category Mechanical | SENSOR_CATEGORY_MECHANICAL | SensorCategories.SensorCategoryMechanical | {8D131D68-8EF7-4656-80B5-CCCBD93791C5}
    Category Motion | SENSOR_CATEGORY_MOTION | SensorCategories.SensorCategoryMotion | {CD09DAF1-3B2E-4C3D-B598-B5E5FF93FD46}
    Category Orientation | SENSOR_CATEGORY_ORIENTATION | SensorCategories.SensorCategoryOrientation | {9E6C04B6-96FE-4954-B726-68682A473F69}
    Category Scanner | SENSOR_CATEGORY_SCANNER | SensorCategories.SensorCategoryScanner | {B000E77E-F5B5-420F-815D-0270A726F270}
    Type HumanProximity | SENSOR_TYPE_HUMAN_PROXIMITY | SensorTypes.SensorTypeHumanProximity | {5220DAE9-3179-4430-9F90-06266D2A34DE}
    Type AmbientLight | SENSOR_TYPE_AMBIENT_LIGHT | SensorTypes.SensorTypeAmbientLight | {97F115C8-599A-4153-8894-D2D12899918A}
    Type Gps | SENSOR_TYPE_LOCATION_GPS | SensorTypes.SensorTypeLocationGps | {ED4CA589-327A-4FF9-A560-91DA4B48275E}
    Type Accelerometer3D | SENSOR_TYPE_ACCELEROMETER_3D | SensorTypes.SensorTypeAccelerometer3D | {C2FB0F5F-E2D2-4C78-BCD0-352A9582819D}
    Type Gyrometer3D | SENSOR_TYPE_GYROMETER_3D | SensorTypes.SensorTypeGyrometer3D | {09485F5A-759E-42C2-BD4B-A349B75C8643}
    Type Compass3D | SENSOR_TYPE_COMPASS_3D | SensorTypes.SensorTypeCompass3D | {76B5CE0D-17DD-414D-93A1-E127F40BDF6E}
    Type DeviceOrientation | SENSOR_TYPE_DEVICE_ORIENTATION | SensorTypes.SensorTypeDeviceOrientation | {CDB5D8F7-3CFD-41C8-8542-CCE622CF5D6E}
    Type Inclinometer3D | SENSOR_TYPE_INCLINOMETER_3D | SensorTypes.SensorTypeInclinometer3D | {B84919FB-EA85-4976-8444-6F6F5C6D31DB}

    Table 3. Example of Constants and Globally Unique IDs (GUIDs)

    Above are the most commonly used GUIDs; many more are available. At first you might think that the GUIDs are silly and tedious, but there is one good reason for using them: extensibility. Since the APIs don’t care about the actual sensor names (they just pass GUIDs around), it is possible for vendors to invent new GUIDs for “value add” sensors.

    Generating New GUIDs

    Microsoft provides a tool in Visual Studio* for generating new GUIDs. Figure 3 shows a screenshot from Visual Studio for doing this. All the vendor has to do is publish them, and new functionality can be exposed without the need to change the Microsoft APIs or any operating system code at all.

    Figure 3. Defining new GUIDs for value add sensors

    Using Sensor Manager Object


    In order for an app to use a sensor, the Microsoft Sensor Framework needs a way to “bind” a sensor object to the actual hardware. It does this via Plug and Play, using a special object called the Sensor Manager Object.

    Ask by Type

    An app can ask for a specific type of sensor, such as Gyrometer3D. The Sensor Manager consults the list of sensor hardware present on the computer and returns a collection of matching objects bound to that hardware. Although the Sensor Collection may have 0, 1, or more objects, it usually has only one. Below is a C++ code sample illustrating the use of the Sensor Manager object’s GetSensorsByType method to search for 3-axis Gyros and return them in a Sensor Collection. Note that ::CoCreateInstance() must be called for the Sensor Manager Object first.

    // Additional includes for sensors
    #include <InitGuid.h>
    #include <SensorsApi.h>
    #include <Sensors.h>
    // Create a COM interface to the SensorManager object.
    ISensorManager* pSensorManager = NULL;
    HRESULT hr = ::CoCreateInstance(CLSID_SensorManager, NULL, CLSCTX_INPROC_SERVER, 
        IID_PPV_ARGS(&pSensorManager));
    if (FAILED(hr))
    {
        ::MessageBox(NULL, _T("Unable to CoCreateInstance() the SensorManager."), 
            _T("Sensor C++ Sample"), MB_OK | MB_ICONERROR);
        return -1;
    }
    // Get a collection of all 3-axis Gyros on the computer.
    ISensorCollection* pSensorCollection = NULL;
    hr = pSensorManager->GetSensorsByType(SENSOR_TYPE_GYROMETER_3D, &pSensorCollection);
    if (FAILED(hr))
    {
        ::MessageBox(NULL, _T("Unable to find any Gyros on the computer."), 
            _T("Sensor C++ Sample"), MB_OK | MB_ICONERROR);
        return -1;
    }
     

    Ask by Category

    An app can request sensors by category, such as all motion sensors. The Sensor Manager consults the list of sensor hardware on the computer and returns a collection of motion objects bound to that hardware. The SensorCollection may have 0, 1, or more objects in it. On most computers, the collection will have two motion objects: Accelerometer3D and Gyrometer3D.

    The C++ code sample below illustrates the use of the Sensor Manager object’s GetSensorsByCategory method to search for motion sensors and return them in a sensor collection.

    // Additional includes for sensors
    #include <InitGuid.h>
    #include <SensorsApi.h>
    #include <Sensors.h>
    // Create a COM interface to the SensorManager object.
    ISensorManager* pSensorManager = NULL;
    HRESULT hr = ::CoCreateInstance(CLSID_SensorManager, NULL, CLSCTX_INPROC_SERVER, 
        IID_PPV_ARGS(&pSensorManager));
    if (FAILED(hr))
    {
        ::MessageBox(NULL, _T("Unable to CoCreateInstance() the SensorManager."), 
            _T("Sensor C++ Sample"), MB_OK | MB_ICONERROR);
        return -1;
    }
    // Get a collection of all 3-axis Gyros on the computer.
    ISensorCollection* pSensorCollection = NULL;
    hr = pSensorManager->GetSensorsByCategory(SENSOR_CATEGORY_MOTION, &pSensorCollection);
    if (FAILED(hr))
    {
        ::MessageBox(NULL, _T("Unable to find any Motion sensors on the computer."), 
            _T("Sensor C++ Sample"), MB_OK | MB_ICONERROR);
        return -1;
    }
     

    Ask by Category “All”

    In practice, it is most efficient for an app to request all of the sensors on the computer at once. The Sensor Manager consults the list of sensor hardware on the computer and returns a collection of all the objects bound to that hardware. The Sensor Collection may have 0, 1, or more objects in it. On most computers, the collection will have seven or more objects.

    C++ does not have a GetAllSensors call, so you must use GetSensorsByCategory(SENSOR_CATEGORY_ALL, …) instead as shown in the sample code below.

    // Additional includes for sensors
    #include <InitGuid.h>
    #include <SensorsApi.h>
    #include <Sensors.h>
    // Create a COM interface to the SensorManager object.
    ISensorManager* pSensorManager = NULL;
    HRESULT hr = ::CoCreateInstance(CLSID_SensorManager, NULL, CLSCTX_INPROC_SERVER, 
        IID_PPV_ARGS(&pSensorManager));
    if (FAILED(hr))
    {
        ::MessageBox(NULL, _T("Unable to CoCreateInstance() the SensorManager."), 
            _T("Sensor C++ Sample"), MB_OK | MB_ICONERROR);
        return -1;
    }
    // Get a collection of all sensors on the computer.
    ISensorCollection* pSensorCollection = NULL;
    hr = pSensorManager->GetSensorsByCategory(SENSOR_CATEGORY_ALL, &pSensorCollection);
    if (FAILED(hr))
    {
        ::MessageBox(NULL, _T("Unable to find any sensors on the computer."), 
            _T("Sensor C++ Sample"), MB_OK | MB_ICONERROR);
        return -1;
    }
     

    Sensor Life Cycle – Enter and Leave Events

    On Windows, as with most hardware devices, sensors are treated as Plug and Play devices. There are a few different scenarios where sensors can be connected/disconnected:

    1. It is possible to have USB-based sensors external to the system and plugged in to a USB port. 
    2. It is conceivable to have sensors that are attached by an unreliable wireless interface (such as Bluetooth*) or wired interface (such as Ethernet), where connects and disconnects are common.
    3. If a Windows Update upgrades the device driver for the sensors, they will appear to disconnect and then reconnect.
    4. When Windows shuts down (to S4 or S5), the sensors appear to disconnect.

    In the context of sensors, a Plug and Play connect is called an Enter event, and disconnect is called a Leave event. Resilient apps need to be able to handle both.

    Enter Event Callback

    If the app is already running at the time a sensor is plugged in, the Sensor Manager reports the sensor Enter event; however, if the sensors are already plugged in when the app starts running, this will not result in Enter events for those sensors. In C++/COM, you must use the SetEventSink method to hook the callback. The callback must be an entire class that inherits from ISensorManagerEvents and must implement IUnknown. Additionally, the ISensorManagerEvents interface must have callback function implementations for:

    	STDMETHODIMP OnSensorEnter(ISensor *pSensor, SensorState state);
    // Hook the SensorManager for any SensorEnter events.
    pSensorManagerEventClass = new SensorManagerEventSink();  // create C++ class instance
    // get the ISensorManagerEvents COM interface pointer
    HRESULT hr = pSensorManagerEventClass->QueryInterface(IID_PPV_ARGS(&pSensorManagerEvents)); 
    if (FAILED(hr))
    {
        ::MessageBox(NULL, _T("Cannot query ISensorManagerEvents interface for our callback class."), 
            _T("Sensor C++ Sample"), MB_OK | MB_ICONERROR);
        return -1;
    }
    // hook COM interface of our class to SensorManager eventer
    hr = pSensorManager->SetEventSink(pSensorManagerEvents); 
    if (FAILED(hr))
    {
        ::MessageBox(NULL, _T("Cannot SetEventSink on SensorManager to our callback class."), 
            _T("Sensor C++ Sample"), MB_OK | MB_ICONERROR);
        return -1;
    }
    
    

    Code: Hook Callback for Enter event

    Below is the C++/COM equivalent of the Enter callback. All the initialization steps from the main loop would be performed in this function. In fact, it is more efficient to refactor the code so that the main loop merely calls OnSensorEnter to simulate an Enter event.

    STDMETHODIMP SensorManagerEventSink::OnSensorEnter(ISensor *pSensor, SensorState state)
    {
        // Examine the SupportsDataField for SENSOR_DATA_TYPE_LIGHT_LEVEL_LUX.
        VARIANT_BOOL bSupported = VARIANT_FALSE;
        HRESULT hr = pSensor->SupportsDataField(SENSOR_DATA_TYPE_LIGHT_LEVEL_LUX, &bSupported);
        if (FAILED(hr))
        {
            ::MessageBox(NULL, _T("Cannot check SupportsDataField for SENSOR_DATA_TYPE_LIGHT_LEVEL_LUX."), 
                _T("Sensor C++ Sample"), MB_OK | MB_ICONINFORMATION);
            return hr;
        }
        if (bSupported == VARIANT_FALSE)
        {
            // This is not the sensor we want.
            return -1;
        }
        ISensor *pAls = pSensor;  // It looks like an ALS, memorize it. 
        ::MessageBox(NULL, _T("Ambient Light Sensor has entered."), 
            _T("Sensor C++ Sample"), MB_OK | MB_ICONINFORMATION);
        .
        .
        .
        return hr;
    }
    
    

    Code: Callback for Enter event

    Leave Event

    The individual sensor (not the Sensor Manager) reports when the Leave event happens. This code is the same as the previous hook callback for an Enter event.

    // Hook the Sensor for any DataUpdated, Leave, or StateChanged events.
    SensorEventSink* pSensorEventClass = new SensorEventSink();  // create C++ class instance
    ISensorEvents* pSensorEvents = NULL;
    // get the ISensorEvents COM interface pointer
    HRESULT hr = pSensorEventClass->QueryInterface(IID_PPV_ARGS(&pSensorEvents)); 
    if (FAILED(hr))
    {
        ::MessageBox(NULL, _T("Cannot query ISensorEvents interface for our callback class."), 
            _T("Sensor C++ Sample"), MB_OK | MB_ICONERROR);
        return -1;
    }
    hr = pSensor->SetEventSink(pSensorEvents); // hook COM interface of our class to Sensor eventer
    if (FAILED(hr))
    {
        ::MessageBox(NULL, _T("Cannot SetEventSink on the Sensor to our callback class."), 
            _T("Sensor C++ Sample"), MB_OK | MB_ICONERROR);
        return -1;
    }
    
    

    Code: Hook Callback for Leave event

    The OnLeave event handler receives the ID of the leaving sensor as an argument.

    STDMETHODIMP SensorEventSink::OnLeave(REFSENSOR_ID sensorID)
    {
        HRESULT hr = S_OK;
        ::MessageBox(NULL, _T("Ambient Light Sensor has left."), 
            _T("Sensor C++ Sample"), MB_OK | MB_ICONINFORMATION);
        // Perform any housekeeping tasks for the sensor that is leaving.
        // For example, if you have maintained a reference to the sensor,
        // release it now and set the pointer to NULL.
        return hr;
    }
    
    

    Code: Callback for Leave event

    Picking Sensors for an App


    Different types of sensors report different information. Microsoft calls these pieces of information Data Fields, and they are grouped together in a SensorDataReport. A computer may (potentially) have more than one type of sensor that an app can use. The app won’t care which sensor the information came from, so long as it is available.

    Table 4 shows the constant names for the most commonly used Data Fields for Win32/COM and .NET. Just like sensor identifiers, these constants are just human-readable names for their associated GUIDs. This method of association provides for extensibility of Data Fields beyond the “well known” fields that Microsoft has pre-defined.

    Constant (Win32*/COM) | Constant (.NET) | PROPERTYKEY (GUID,PID)
    SENSOR_DATA_TYPE_TIMESTAMP | SensorDataTypeTimestamp | {DB5E0CF2-CF1F-4C18-B46C-D86011D62150},2
    SENSOR_DATA_TYPE_LIGHT_LEVEL_LUX | SensorDataTypeLightLevelLux | {E4C77CE2-DCB7-46E9-8439-4FEC548833A6},2
    SENSOR_DATA_TYPE_ACCELERATION_X_G | SensorDataTypeAccelerationXG | {3F8A69A2-07C5-4E48-A965-CD797AAB56D5},2
    SENSOR_DATA_TYPE_ACCELERATION_Y_G | SensorDataTypeAccelerationYG | {3F8A69A2-07C5-4E48-A965-CD797AAB56D5},3
    SENSOR_DATA_TYPE_ACCELERATION_Z_G | SensorDataTypeAccelerationZG | {3F8A69A2-07C5-4E48-A965-CD797AAB56D5},4
    SENSOR_DATA_TYPE_ANGULAR_VELOCITY_X_DEGREES_PER_SECOND | SensorDataTypeAngularVelocityXDegreesPerSecond | {3F8A69A2-07C5-4E48-A965-CD797AAB56D5},10
    SENSOR_DATA_TYPE_ANGULAR_VELOCITY_Y_DEGREES_PER_SECOND | SensorDataTypeAngularVelocityYDegreesPerSecond | {3F8A69A2-07C5-4E48-A965-CD797AAB56D5},11
    SENSOR_DATA_TYPE_ANGULAR_VELOCITY_Z_DEGREES_PER_SECOND | SensorDataTypeAngularVelocityZDegreesPerSecond | {3F8A69A2-07C5-4E48-A965-CD797AAB56D5},12
    SENSOR_DATA_TYPE_TILT_X_DEGREES | SensorDataTypeTiltXDegrees | {1637D8A2-4248-4275-865D-558DE84AEDFD},2
    SENSOR_DATA_TYPE_TILT_Y_DEGREES | SensorDataTypeTiltYDegrees | {1637D8A2-4248-4275-865D-558DE84AEDFD},3
    SENSOR_DATA_TYPE_TILT_Z_DEGREES | SensorDataTypeTiltZDegrees | {1637D8A2-4248-4275-865D-558DE84AEDFD},4
    SENSOR_DATA_TYPE_MAGNETIC_HEADING_COMPENSATED_MAGNETIC_NORTH_DEGREES | SensorDataTypeMagneticHeadingCompensatedTrueNorthDegrees | {1637D8A2-4248-4275-865D-558DE84AEDFD},11
    SENSOR_DATA_TYPE_MAGNETIC_FIELD_STRENGTH_X_MILLIGAUSS | SensorDataTypeMagneticFieldStrengthXMilligauss | {1637D8A2-4248-4275-865D-558DE84AEDFD},19
    SENSOR_DATA_TYPE_MAGNETIC_FIELD_STRENGTH_Y_MILLIGAUSS | SensorDataTypeMagneticFieldStrengthYMilligauss | {1637D8A2-4248-4275-865D-558DE84AEDFD},20
    SENSOR_DATA_TYPE_MAGNETIC_FIELD_STRENGTH_Z_MILLIGAUSS | SensorDataTypeMagneticFieldStrengthZMilligauss | {1637D8A2-4248-4275-865D-558DE84AEDFD},21
    SENSOR_DATA_TYPE_QUATERNION | SensorDataTypeQuaternion | {1637D8A2-4248-4275-865D-558DE84AEDFD},17
    SENSOR_DATA_TYPE_ROTATION_MATRIX | SensorDataTypeRotationMatrix | {1637D8A2-4248-4275-865D-558DE84AEDFD},16
    SENSOR_DATA_TYPE_LATITUDE_DEGREES | SensorDataTypeLatitudeDegrees | {055C74D8-CA6F-47D6-95C6-1ED3637A0FF4},2
    SENSOR_DATA_TYPE_LONGITUDE_DEGREES | SensorDataTypeLongitudeDegrees | {055C74D8-CA6F-47D6-95C6-1ED3637A0FF4},3
    SENSOR_DATA_TYPE_ALTITUDE_ELLIPSOID_METERS | SensorDataTypeAltitudeEllipsoidMeters | {055C74D8-CA6F-47D6-95C6-1ED3637A0FF4},5

    Table 4. Data Field identifier constants

    One thing that makes Data Field identifiers different from sensor IDs is the use of a data type called PROPERTYKEY. A PROPERTYKEY consists of a GUID (similar to what sensors have), plus an extra number called a “PID” (property ID). You might notice that the GUID part of a PROPERTYKEY is common for sensors that are in the same category. Data Fields have a native data type for all of their values, such as Boolean, unsigned char, int, float, double, etc.

    In Win32/COM, the value of a Data Field is stored in a polymorphic data type called PROPVARIANT. In .NET, there is a CLR (Common Language Runtime) data type called “object” that does the same thing. The polymorphic data type will need to be queried and/or typecast to the “expected”/”documented” data type.

    The SupportsDataField() method of the sensor should be used to check the sensors for the Data Fields of interest. This is the most common programming idiom used to select sensors. Depending on the usage model of the app, only a subset of the Data Fields may be required. Sensors that support the desired Data Fields should be selected. Type casting will be required to assign the sub-classed member variables from the base class sensor.

    ISensor* pSensor = NULL;
    ISensor* m_pAls = NULL;
    ISensor* m_pAccel = NULL;
    ISensor* m_pTilt = NULL;
    // Cycle through the collection looking for sensors we care about.
    ULONG ulCount = 0;
    HRESULT hr = pSensorCollection->GetCount(&ulCount);
    if (FAILED(hr))
    {
        ::MessageBox(NULL, _T("Unable to get count of sensors on the computer."), _T("Sensor C++ Sample"), MB_OK | MB_ICONERROR);
        return -1;
    }
    for (int i = 0; i < (int)ulCount; i++)
    {
        hr = pSensorCollection->GetAt(i, &pSensor);
        if (SUCCEEDED(hr))
        {
            VARIANT_BOOL bSupported = VARIANT_FALSE;
            hr = pSensor->SupportsDataField(SENSOR_DATA_TYPE_LIGHT_LEVEL_LUX, &bSupported);
            if (SUCCEEDED(hr) && (bSupported == VARIANT_TRUE)) m_pAls = pSensor;
            hr = pSensor->SupportsDataField(SENSOR_DATA_TYPE_ACCELERATION_Z_G, &bSupported);
            if (SUCCEEDED(hr) && (bSupported == VARIANT_TRUE)) m_pAccel = pSensor;
            hr = pSensor->SupportsDataField(SENSOR_DATA_TYPE_TILT_Z_DEGREES, &bSupported);
            if (SUCCEEDED(hr) && (bSupported == VARIANT_TRUE)) m_pTilt = pSensor;
            .
            .
            .
        }
    }
    
    

    Code: Use of the SupportsDataField() method of the sensor to check for supported data field

    Sensor Properties

    In addition to Data Fields, sensors have Properties that can be used for identification and configuration. Table 5 shows the most commonly used Properties. Just like Data Fields, Properties have constant names used by Win32/COM and .NET, and those constants are really PROPERTYKEY numbers underneath. Properties are extensible by vendors and also have PROPVARIANT polymorphic data types. Unlike Data Fields, which are read-only, Properties can be read and written. It is up to the individual sensor whether it rejects Write attempts; because no exception is thrown when a write attempt fails, a write-read-verify sequence needs to be performed.

    Identification (Win32*/COM) | Identification (.NET) | PROPERTYKEY (GUID,PID)
    SENSOR_PROPERTY_PERSISTENT_UNIQUE_ID | SensorID | {7F8383EC-D3EC-495C-A8CF-B8BBE85C2920},5
    WPD_FUNCTIONAL_OBJECT_CATEGORY | CategoryID | {8F052D93-ABCA-4FC5-A5AC-B01DF4DBE598},2
    SENSOR_PROPERTY_TYPE | TypeID | {7F8383EC-D3EC-495C-A8CF-B8BBE85C2920},2
    SENSOR_PROPERTY_STATE | State | {7F8383EC-D3EC-495C-A8CF-B8BBE85C2920},3
    SENSOR_PROPERTY_MANUFACTURER | SensorManufacturer | {7F8383EC-D3EC-495C-A8CF-B8BBE85C2920},6
    SENSOR_PROPERTY_MODEL | SensorModel | {7F8383EC-D3EC-495C-A8CF-B8BBE85C2920},7
    SENSOR_PROPERTY_SERIAL_NUMBER | SensorSerialNumber | {7F8383EC-D3EC-495C-A8CF-B8BBE85C2920},8
    SENSOR_PROPERTY_FRIENDLY_NAME | FriendlyName | {7F8383EC-D3EC-495C-A8CF-B8BBE85C2920},9
    SENSOR_PROPERTY_DESCRIPTION | SensorDescription | {7F8383EC-D3EC-495C-A8CF-B8BBE85C2920},10
    SENSOR_PROPERTY_MIN_REPORT_INTERVAL | MinReportInterval | {7F8383EC-D3EC-495C-A8CF-B8BBE85C2920},12
    SENSOR_PROPERTY_CONNECTION_TYPE | SensorConnectionType | {7F8383EC-D3EC-495C-A8CF-B8BBE85C2920},11
    SENSOR_PROPERTY_DEVICE_ID | SensorDevicePath | {7F8383EC-D3EC-495C-A8CF-B8BBE85C2920},15
    SENSOR_PROPERTY_RANGE_MAXIMUM | SensorRangeMaximum | {7F8383EC-D3EC-495C-A8CF-B8BBE85C2920},21
    SENSOR_PROPERTY_RANGE_MINIMUM | SensorRangeMinimum | {7F8383EC-D3EC-495C-A8CF-B8BBE85C2920},20
    SENSOR_PROPERTY_ACCURACY | SensorAccuracy | {7F8383EC-D3EC-495C-A8CF-B8BBE85C2920},17
    SENSOR_PROPERTY_RESOLUTION | SensorResolution | {7F8383EC-D3EC-495C-A8CF-B8BBE85C2920},18

    Configuration (Win32/COM) | Configuration (.NET) | PROPERTYKEY (GUID,PID)
    SENSOR_PROPERTY_CURRENT_REPORT_INTERVAL | ReportInterval | {7F8383EC-D3EC-495C-A8CF-B8BBE85C2920},13
    SENSOR_PROPERTY_CHANGE_SENSITIVITY | ChangeSensitivity | {7F8383EC-D3EC-495C-A8CF-B8BBE85C2920},14
    SENSOR_PROPERTY_REPORTING_STATE | ReportingState | {7F8383EC-D3EC-495C-A8CF-B8BBE85C2920},27

    Table 5. Commonly used sensor Properties and PIDs

    Setting Sensor Sensitivity

    The sensitivity setting is a very useful Property of a sensor. It can be used to assign a threshold that controls or filters the number of SensorDataReports sent to the host computer. In this way, traffic can be reduced: only send up those DataUpdated events that are truly worthy of bothering the host CPU. Microsoft has defined the data type of this Sensitivity property as a container type called IPortableDeviceValues in Win32/COM and SensorPortableDeviceValues in .NET. This container holds a collection of tuples, each of which is a Data Field PROPERTYKEY followed by the sensitivity value for that Data Field. The sensitivity always uses the same units of measure and data type as the matching Data Field.

    // Configure sensitivity
    // create an IPortableDeviceValues container for holding the <Data Field, Sensitivity> tuples.
    IPortableDeviceValues* pInSensitivityValues;
    hr = ::CoCreateInstance(CLSID_PortableDeviceValues, NULL, CLSCTX_INPROC_SERVER, IID_PPV_ARGS(&pInSensitivityValues));
    if (FAILED(hr))
    {
        ::MessageBox(NULL, _T("Unable to CoCreateInstance() a PortableDeviceValues collection."), _T("Sensor C++ Sample"), MB_OK | MB_ICONERROR);
        return -1;
    }
    // fill in IPortableDeviceValues container contents here: 0.1 G sensitivity in each of X, Y, and Z axes.
    PROPVARIANT pv;
    PropVariantInit(&pv);
    pv.vt = VT_R8; // COM type for (double)
    pv.dblVal = (double)0.1;
    pInSensitivityValues->SetValue(SENSOR_DATA_TYPE_ACCELERATION_X_G, &pv);
    pInSensitivityValues->SetValue(SENSOR_DATA_TYPE_ACCELERATION_Y_G, &pv);
    pInSensitivityValues->SetValue(SENSOR_DATA_TYPE_ACCELERATION_Z_G, &pv);
    // create an IPortableDeviceValues container for holding the <SENSOR_PROPERTY_CHANGE_SENSITIVITY, pInSensitivityValues> tuple.
    IPortableDeviceValues* pInValues;
    hr = ::CoCreateInstance(CLSID_PortableDeviceValues, NULL, CLSCTX_INPROC_SERVER, IID_PPV_ARGS(&pInValues));
    if (FAILED(hr))
    {
        ::MessageBox(NULL, _T("Unable to CoCreateInstance() a PortableDeviceValues collection."), _T("Sensor C++ Sample"), MB_OK | MB_ICONERROR);
        return -1;
    }
    // fill it in
    pInValues->SetIPortableDeviceValuesValue(SENSOR_PROPERTY_CHANGE_SENSITIVITY, pInSensitivityValues);
    // now actually set the sensitivity
    IPortableDeviceValues* pOutValues;
    hr = pAls->SetProperties(pInValues, &pOutValues);
    if (FAILED(hr))
    {
        ::MessageBox(NULL, _T("Unable to SetProperties() for Sensitivity."), _T("Sensor C++ Sample"), MB_OK | MB_ICONERROR);
        return -1;
    }
    // check to see if any of the setting requests failed
    DWORD dwCount = 0;
    hr = pOutValues->GetCount(&dwCount);
    if (FAILED(hr) || (dwCount > 0))
    {
        ::MessageBox(NULL, _T("Failed to set one-or-more Sensitivity values."), _T("Sensor C++ Sample"), MB_OK | MB_ICONERROR);
        return -1;
    }
    PropVariantClear(&pv);
    
    

    Requesting permissions for Sensors

    Some information provided by sensors may be considered sensitive, i.e., Personally Identifiable Information (PII). Data Fields such as the computer’s location (e.g., latitude and longitude) could be used to track the user. For this reason, Windows forces apps to get end-user permission to access the sensor. The State property of the sensor and the RequestPermissions() method of the SensorManager can be used as needed.

    The RequestPermissions() method takes an array of sensors as an argument, so an app can request permission for more than one sensor at a time. The C++/COM code is shown below. Note that an ISensorCollection* must be provided as an argument to RequestPermissions().

    // Get the sensor's state
    
    SensorState state = SENSOR_STATE_ERROR;
    HRESULT hr = pSensor->GetState(&state);
    if (FAILED(hr))
    {
        ::MessageBox(NULL, _T("Unable to get sensor state."), _T("Sensor C++ Sample"), MB_OK | MB_ICONERROR);
        return -1;
    }
    // Check for access permissions, request permission if necessary.
    if (state == SENSOR_STATE_ACCESS_DENIED)
    {
        // Make a SensorCollection with only the sensors we want to get permission to access.
        ISensorCollection *pSensorCollection = NULL;
        hr = ::CoCreateInstance(CLSID_SensorCollection, NULL, CLSCTX_INPROC_SERVER, IID_PPV_ARGS(&pSensorCollection));
        if (FAILED(hr))
        {
            ::MessageBox(NULL, _T("Unable to CoCreateInstance() a SensorCollection."), 
                _T("Sensor C++ Sample"), MB_OK | MB_ICONERROR);
            return -1;
        }
        pSensorCollection->Clear();
        pSensorCollection->Add(pAls); // add 1 or more sensors to request permission for...
        // Have the SensorManager prompt the end-user for permission.
        hr = m_pSensorManager->RequestPermissions(NULL, pSensorCollection, TRUE);
        if (FAILED(hr))
        {
            ::MessageBox(NULL, _T("No permission to access sensors that we care about."), 
                _T("Sensor C++ Sample"), MB_OK | MB_ICONERROR);
            return -1;
        }
    }
    
    
     

    Sensor Data Update

    Sensors report data by throwing an event called a DataUpdated event. The actual Data Fields are packaged inside a SensorDataReport, which is passed to any attached DataUpdated event handlers. An app can obtain the SensorDataReport by hooking a callback handler to the sensor’s DataUpdated event. The event occurs in a Windows Sensor Framework thread, which is a different thread than the message-pump thread used to update the app’s GUI. Therefore, a “hand-off” of the SensorDataReport from the event handler (Als_DataUpdate) to a separate handler (Als_UpdateGUI) that can execute on the context of the GUI thread is required. In .NET, such a handler is called a delegate function.

    The example below shows preparation of the delegate function. In C++/COM, the SetEventSink method must be used to hook the callback. The callback cannot simply be a function; it must be an entire class that inherits from ISensorEvents and also implements IUnknown. The ISensorEvents interface must have callback function implementations for:

    	STDMETHODIMP OnEvent(ISensor *pSensor, REFGUID eventID, IPortableDeviceValues *pEventData);
    	STDMETHODIMP OnDataUpdated(ISensor *pSensor, ISensorDataReport *pNewData);
    	STDMETHODIMP OnLeave(REFSENSOR_ID sensorID);
    	STDMETHODIMP OnStateChanged(ISensor* pSensor, SensorState state);
    // Hook the Sensor for any DataUpdated, Leave, or StateChanged events.
    SensorEventSink* pSensorEventClass = new SensorEventSink();  // create C++ class instance
    ISensorEvents* pSensorEvents = NULL;
    // get the ISensorEvents COM interface pointer
    HRESULT hr = pSensorEventClass->QueryInterface(IID_PPV_ARGS(&pSensorEvents)); 
    if (FAILED(hr))
    {
        ::MessageBox(NULL, _T("Cannot query ISensorEvents interface for our callback class."), 
            _T("Sensor C++ Sample"), MB_OK | MB_ICONERROR);
        return -1;
    }
    hr = pSensor->SetEventSink(pSensorEvents); // hook COM interface of our class to Sensor eventer
    if (FAILED(hr))
    {
        ::MessageBox(NULL, _T("Cannot SetEventSink on the Sensor to our callback class."), 
            _T("Sensor C++ Sample"), MB_OK | MB_ICONERROR);
        return -1;
    }
    
    

    Code: Set a COM Event Sink for the sensor

    The DataUpdated event handler receives the SensorDataReport (and the sensor that initiated the event) as arguments. It calls the Invoke() method of the form to post those items to the delegate function. The GUI thread runs the delegate function posted to its Invoke queue and passes the arguments to it. The delegate function casts the data type of the SensorDataReport to the expected subclass, gaining access to its Data Fields. The Data Fields are extracted using the GetDataField() method of the SensorDataReport object. Each Data Field has to be typecast to its “expected”/“documented” data type (from the generic/polymorphic data type returned by the GetDataField() method). The app can then format and display the data in the GUI.

    The OnDataUpdated event handler receives the SensorDataReport (and the sensor that initiated the event) as arguments. The Data Fields are extracted using the GetSensorValue()method of the SensorDataReport object. Each of the Data Fields needs to have their PROPVARIANT checked for their “expected”/”documented” data types. The app can then format and display the data in the GUI. It is not necessary to use the equivalent of a C# delegate. This is because all C++ GUI functions (such as ::SetWindowText() shown here) use Windows message-passing to post the GUI update to the GUI thread / message-loop (the WndProc of your main window or dialog box).

    STDMETHODIMP SensorEventSink::OnDataUpdated(ISensor *pSensor, ISensorDataReport *pNewData)
    {
        HRESULT hr = S_OK;
        if ((NULL == pNewData) || (NULL == pSensor)) return E_INVALIDARG;
        float fLux = 0.0f;
        PROPVARIANT pv = {};
        hr = pNewData->GetSensorValue(SENSOR_DATA_TYPE_LIGHT_LEVEL_LUX, &pv);
        if (SUCCEEDED(hr))
        {
            if (pv.vt == VT_R4) // make sure the PROPVARIANT holds a float as we expect
            {
                // Get the lux value.
                fLux = pv.fltVal;
                // Update the GUI
                wchar_t *pwszLabelText = (wchar_t *)malloc(64 * sizeof(wchar_t));
                swprintf_s(pwszLabelText, 64, L"Illuminance Lux: %.1f", fLux);
                BOOL bSuccess = ::SetWindowText(m_hwndLabel, (LPCWSTR)pwszLabelText);
                if (bSuccess == FALSE)
                {
                    ::MessageBox(NULL, _T("Cannot SetWindowText on label control."), 
                        _T("Sensor C++ Sample"), MB_OK | MB_ICONERROR);
                }
                free(pwszLabelText);
            }
        }
        PropVariantClear(&pv);
        return hr;
    }
    
    

     Properties of the SensorDataReport object can be referenced to extract Data Fields from the SensorDataReport. This only works for the .NET API and for “well known” or “expected” Data Fields of that particular SensorDataReport subclass. For the Win32/COM API, the GetDataField method must be used. It is possible to use “Dynamic Data Fields” for the underlying driver/firmware to “piggyback” any “extended/unexpected” Data Fields inside SensorDataReports. The GetDataField method is used to extract those.

    Using Sensors in Windows Store apps


    Unlike the Desktop mode, the WinRT Sensor API follows a common template for each of the sensors:

    • There is usually a single event called ReadingChanged that calls the callback with an xxxReadingChangedEventArgs containing a Reading object holding the actual data. The accelerometer is an exception; it also has a Shaken event.
    • The hardware-bound instance of the sensor class is retrieved using the GetDefault() method.
    • Polling can be done with the GetCurrentReading() method.
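
    As a brief C# illustration of this template (the ReportInterval value and the handler body below are examples only, not required settings):

    // Sketch of the common WinRT sensor pattern, using the accelerometer.
    using System;
    using Windows.Devices.Sensors;

    Accelerometer accelerometer = Accelerometer.GetDefault();
    if (accelerometer != null)
    {
        // Request a report interval, never going below the sensor's minimum.
        accelerometer.ReportInterval = Math.Max(accelerometer.MinimumReportInterval, 100u);

        // Event-driven updates via ReadingChanged...
        accelerometer.ReadingChanged += (sender, args) =>
        {
            AccelerometerReading reading = args.Reading;
            System.Diagnostics.Debug.WriteLine("X={0:F2} Y={1:F2} Z={2:F2}",
                reading.AccelerationX, reading.AccelerationY, reading.AccelerationZ);
        };

        // ...or polling with GetCurrentReading().
        AccelerometerReading current = accelerometer.GetCurrentReading();
    }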

    Windows Store apps are often written either in JavaScript* or in C#. There are different language-bindings to the API, which result in a slightly different capitalization appearance in the API names and a slightly different way that events are handled. The simplified API is easier to use, and the pros and cons are listed in Table 6.

    SensorManager
    Pro: There is no SensorManager to deal with. Apps use the GetDefault() method to get an instance of the sensor class.
    Cons:
    • It is not possible to search for arbitrary sensor instances. If more than one of a particular sensor type exists on a computer, you will only see the “first” one.
    • It is not possible to search for arbitrary sensor types or categories by GUID. Vendor value-add extensions are inaccessible.

    Events
    Pro: Apps only worry about the DataUpdated event.
    Cons:
    • Apps have no access to Enter, Leave, StatusChanged, or arbitrary event types. Vendor value-add extensions are inaccessible.

    Sensor properties
    Pro: Apps only worry about the ReportInterval property.
    Cons:
    • Apps have no access to the other properties, including the most useful one: Sensitivity.
    • Other than manipulating the ReportInterval property, there is no way for Windows Store apps to tune or control the flow rate of Data Reports.
    • Apps cannot access arbitrary Properties by PROPERTYKEY. Vendor value-add extensions are inaccessible.

    Data Report properties
    Pro: Apps only worry about a few, pre-defined Data Fields unique to each sensor.
    Cons:
    • Apps have no access to other Data Fields. If sensors “piggy-back” additional well-known Data Fields in a Data Report beyond what Windows Store apps expect, those Data Fields are inaccessible.
    • Apps cannot access arbitrary Data Fields by PROPERTYKEY. Vendor value-add extensions are inaccessible.
    • Apps have no way to query at run-time what Data Fields a sensor supports. They can only assume what the API predefines.

    Table 6. Sensor APIs for Windows Store apps, pros and cons

    Summary


    Windows 8 APIs provide developers an opportunity to take advantage of sensors available on different platforms under both the traditional Desktop mode and the new Windows Store app interface. In this document, an overview was presented of the sensor APIs available to developers creating Windows 8 applications, focusing on the APIs and code samples for Desktop apps. Many of the new Windows 8 APIs were improved with the Windows 8.1 Operating System and this article provides links to many of the relevant samples provided on MSDN.

    Appendix


    Coordinate System for Different Form Factors
    The Windows API reports X, Y, and Z axes in a manner that is compatible with the HTML5 standard (and Android*). It is also called the “ENU” system because X faces virtual “East”, Y faces virtual “North”, and Z faces “Up.”

    To figure out the direction of rotation, use the “Right Hand Rule”:

       * Point the thumb of your right hand in the direction of one of the axes.
       * Positive angle rotation around that axis will follow the curve of your fingers.

    These are the X, Y, and Z axes for a tablet form-factor PC, or phone (left) and for a clamshell PC (right). For more esoteric form factors (for example, a clamshell that is convertible into a tablet), the “standard” orientation is when it is in the TABLET state.

    To develop a navigation application (e.g., a 3D space game), a conversion between the “ENU” system and the coordinate system used by your program is required. This can be done using matrix multiplication. Graphics libraries such as Direct3D* and OpenGL* have APIs for handling this.

    MSDN Resources


    About the Authors


    Gael Hofemeier
    Gael is a Software Engineer in the Developer Relations Division at Intel working with Business Client Technologies. Gael holds a BS in Math and an MBA, both from the University of New Mexico. Gael enjoys hiking, biking, and photography.

    Deepak Vembar, PhD
    Deepak Vembar is a Research Scientist in the Interaction and Experience Research (IXR) group at Intel Labs. His research interests are at the intersection of computer graphics and human computer interaction including areas of real-time graphics, virtual reality, haptics, eye-tracking, and user interaction. Prior to joining Intel Labs, Deepak was a Software Engineer in Software and Services Group (SSG) at Intel, where he worked with PC game developers to optimize their games for Intel platforms, delivered courses and tutorials on heterogeneous platform optimization, and created undergraduate coursework using game demos as an instructional medium for use in school curriculum. 

    Intel and the Intel logo are trademarks of Intel Corporation in the US and/or other countries.
    Copyright © 2012 Intel Corporation. All rights reserved.
    *Other names and brands may be claimed as the property of others.


    Mixing Stylus and Touch Input on Windows* 8


    Downloads


    Download Intel Touch vs Stylus [PDF 635KB]
    Download Touch and Stylus Application Source Code [ZIP 742 KB]

    Documentation for Developers Interested in Mixing Touch and Stylus Interactions


    This document covers the use of touch and stylus interactions in a user interface and briefly examines when to use one over the other. It demonstrates how to implement moving images onto a canvas using touch, mouse, and pen features. In addition, it discusses how to use the pen to capture ink and how to use touch to manipulate images. The examples are displayed in the context of a simple educational tool for manipulating images. The application is written in C# and is designed for Windows* 8 devices.

    Contents

    Introduction


    Alternatives to mouse and keyboard interfaces have broadened the style and scope of user interactions with software applications. Instead of replacing the traditional mouse and keyboard, hardware like touch screens and stylus pens augment traditional devices, giving users more intuitive and efficient ways of interacting with applications. Unlike mice and keyboards, which were forced into a jack-of-all-trades type status, touch gestures and pen input can focus on their strengths, while improving user interaction—and satisfaction—with their computing devices.

    While touch devices may be more common than styli, pen input is showing up on more and more smartphones, tablets, and convertible devices. One may be tempted to dismiss using a stylus as a lesser substitute for using a finger on a touch screen; however, each has its own place in the user interaction world.

    This paper covers the use of touch and stylus interaction in a user interface and offers some guidelines for choosing when to use one over the other. Using a sample application, we will demonstrate how to implement using touch, mouse, and pen to move images onto a canvas area, as well as how to use the pen to capture ink and how to use multi-touch gestures to move, rotate, and resize images. The goal is to help application designers and developers handle user interaction that mixes stylus- and finger-based touch events. Specifically, it is intended to demonstrate how to react to a user switching between touch and stylus.

    Our examples are set in the context of a simple educational tool designed to let users choose images from a palette and place them on a canvas. Once on the canvas, users can use touch and stylus to manipulate the images and add annotations to the canvas. The application is written in C# and is designed for Windows 8 devices.

    Choose the Right Tool for the Situation at Hand


    Touch Interaction

    With the proliferation of smartphones, tablets, and computers with touchscreens, touch interaction has become synonymous with modern devices, and for good reason. For many of the day-to-day interactions users have with their devices, touch is hard to beat. Touch is easy to learn, convenient, and natural, and its gestures can be very rich, enabling users to easily express intentions that would be slow or awkward with any other interaction method.

    Touch strengths:

    • One of the biggest advantages of touch interactions is the ability to use multiple input points (fingertips) at the same time. Multi-touch enables a richer set of motions and gestures than the single point provided by a stylus.
    • Touch supports direct and natural interactions with the objects in the user interface. Through gestures such as tapping, dragging, sliding, pinching, and rotating, users can manipulate objects on the screen similar to physical objects.
    • Touch allows users to combine gestures to perform more than one action at a time (compound manipulations). For example, a user can rotate a picture while simultaneously moving it across the screen.
    • Users can interact with the application without the need to first pick up a device, like a mouse or a stylus.
    • Users rarely misplace their fingers or have their batteries go dead at the wrong time.

    Stylus Interaction

    Some people argue users have no need for a stylus; after all, most of us already come equipped with ten built-in touch devices, and it is highly unlikely we will misplace any of them, as is often the case with a stylus. However, when it comes to precision and accuracy, a finger falls far short of a stylus.

    Stylus strengths:

    • With a stylus you can select a single X/Y coordinate. The size of the contact area of a finger is too large to do this.
    • The shape and size of the contact area does not change during movement, unlike the tip of a finger.
    • It’s easier to keep the target object in one spot while holding the stylus (touch users naturally move their fingers even while trying to maintain a single location).
    • With a stylus, the user’s hand travels a shorter physical distance than the cursor on the screen. This means it is easier to perform a straight-line motion than with a finger.
    • Because the tip of the stylus does not obscure (occlude) the screen, the user interface can display a cursor to assist with targeting.
    • Similarly, styli do not occlude the target spot, making it easier for users to see where they are placing items.
    • A stylus can incorporate a 3-state model where it is on the screen, off of the screen, or near the screen (hovering). Hover can be used to display tooltips when the stylus passes over an item or to indicate which items can be selected. Touch does not have the concept of hover.
    • An application can utilize stylus pressure to add another dimension to the interaction. For example, the amount of pressure can be used by the application to define the width of a line being drawn on the screen.
    • Because of the smaller tip on a stylus, user interface controls can be placed in locations that would be harder to reach with a finger, such as close to the edge of the screen.

    Situations where the precision of a stylus is an advantage over touch:

    • Taking notes by hand (rather than using a keyboard)
    • Creating mathematical or scientific notes containing formulas
    • Drawing in a more natural way
    • Marking up documents
    • Recording signatures (digital ink)
    • Selecting small, closely spaced items
    • Precisely placing an item on an image board or screen
    • Being able to see where an item will be placed (because the target is not obscured by the pointing device)

    The Example Application


    In this paper we demonstrate the touch and stylus concepts and how to implement them through a simple educational application. This example lets users capture handwritten annotations about images they have placed on the screen.

    Imagine a bulletin board where users can place images anywhere on the board. The images can be moved, sized, and rotated. Using a stylus, users can make handwritten notes and diagrams on the board itself, outside of the pictures. For example, a user can write a caption below a picture or draw an arrow showing the relationship between two images.

    The application features a main drawing area (the bulletin board) in the center with a palette of pre-defined images and line colors along the edges. Users use touch and drag to move a copy of an image from the palette to the drawing area or to a new location on the area. The application supports standard multi-touch manipulations to move, rotate, and size the images on the board. Anything “drawn” on the board with a stylus will automatically appear directly on the board, in the color selected in the color palette. A “Clear” button removes the images and drawings from the drawing area so the user can start afresh.

    Figure 1: Sample Application with mixed touch and stylus interaction

    Supported Actions

    The following table describes the user interactions for the sample application.

    Action | Result
    Touch and drag image to drawing area | Place copy of the image on the drawing area
    Finger drag on existing image in drawing area | Move image
    Pinch on existing image in drawing area | Decrease size of image
    Spread on existing image in drawing area | Increase size of image
    Two-finger rotation on existing image in drawing area | Rotate image
    Touch color on color palette | Select color to use when drawing a line
    Stylus draw on drawing area | Draw line on drawing area using current color
    Touch [Clear] button | Remove images and drawings from drawing area

    Table 1. Supported Actions

    Development Environment

    This application is a Windows Store app, sporting the Windows 8 Modern Interface, suitable for any tablet, convertible, or Ultrabook™ device with touch and stylus support.

    OS | Windows* 8
    Language | C# and XAML
    .NET | .NET for Windows Store apps
    IDE | Visual Studio* 2012 for Windows 8

    Figure 2. Development environment

    Using Touch, Mouse, and Pen to Drop Images onto a Canvas


    We start by populating our image picker using the AddImage method. This method simply adds the provided image instance to the image picker (a canvas) and defines its starting location. We also make sure to update our custom slider’s height when adding a new image. A custom slider is necessary because a ScrollViewer would interfere with being able to drag the image.

    private void AddImage(Image img) 
    { 
            this.PickerStack.Children.Add(img); 
         
            double left = 7.5; 
            double top = 7.5 + (157.5 * m_images.Count); 
    
            Canvas.SetLeft(img, left); 
            Canvas.SetTop(img, top); 
    
            m_images.Add(img); 
            m_imagePositions.Add(img, new CanvasPosition(left, top)); 
    
            this.PickerStack.Height = top + 150.0; 
    
            img.PointerPressed += Image_PointerPressed; 
            img.PointerMoved += Image_PointerMoved; 
            img.PointerReleased += Image_PointerReleased; 
       
            UpdateSliderHeight(); 
    } 
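
    For context, the picker might be populated at startup with something like the following (the asset path and sizing here are illustrative assumptions, not taken from the sample):

    // Hypothetical startup code: load a bundled asset and hand it to AddImage.
    var img = new Image
    {
        Source = new BitmapImage(new Uri("ms-appx:///Assets/sample.png")),
        Width = 150,
        Height = 150
    };
    AddImage(img);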

    To be able to drop images onto the central canvas, we first need to provide image dragging functionality. To do this, we add handlers for the image object’s pointer events and feed the resulting event data into a GestureRecognizer instance configured to look for translation gestures. Note that we do not check the pointer device’s type, as we decided to allow the user to drop pictures using any pointer device. However, you can check the device’s type if you want to restrict certain actions to specific device types.

    void Image_PointerPressed(object sender, PointerRoutedEventArgs e)
    {
    	if (m_activeImage != null || m_sliderActive) return;
    	Image img = sender as Image;
    	if (img != null )
    	{
    		m_activeImage = img;
    		Canvas.SetZIndex(m_activeImage, 1);
    		m_activePosition = m_imagePositions[m_activeImage];
    		m_gestureRecognizer.ProcessDownEvent(e.GetCurrentPoint(img));
    		e.Handled = true;
    	}
    }
    
    void Image_PointerMoved(object sender, PointerRoutedEventArgs e) { ... }
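
    For the drag behavior to work, the GestureRecognizer fed by these handlers has to be configured for translation gestures and have its manipulation events hooked up. A minimal setup sketch, assuming the field and handler names used in the surrounding snippets:

    // Minimal configuration sketch (for example, in the picker's constructor);
    // the sample may configure additional settings.
    m_gestureRecognizer = new GestureRecognizer
    {
        GestureSettings = GestureSettings.ManipulationTranslateX
                        | GestureSettings.ManipulationTranslateY
    };
    m_gestureRecognizer.ManipulationStarted += m_gestureRecognizer_ManipulationStarted;
    m_gestureRecognizer.ManipulationUpdated += m_gestureRecognizer_ManipulationUpdated;
    m_gestureRecognizer.ManipulationCompleted += m_gestureRecognizer_ManipulationCompleted;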

    In addition to feeding the pointer released event to the gesture recognizer, we also evaluate the pointer’s position relative to the destination canvas, as well as to the picker. If the release event took place outside of the picker’s bounds and inside of the target canvas, we raise the PictureDropped event to let the target implementation know it should add a new image instance at the given position.

    void Image_PointerReleased(object sender, PointerRoutedEventArgs e)
    {
    	if (m_activeImage == null || m_sliderActive) return;
    	Image img = sender as Image;
    	if (img != null)
    	{
    		PointerPoint imgPoint = e.GetCurrentPoint(img);
    		m_gestureRecognizer.ProcessUpEvent(imgPoint);
    		m_gestureRecognizer.CompleteGesture();
    		e.Handled = true;
    
    		if (m_droppingTarget != null
                   && PictureDropped != null)
    		{
    		  PointerPoint canvasPoint = e.GetCurrentPoint(m_droppingTarget);
    		  PointerPoint pickerPoint = e.GetCurrentPoint(this);
    		  Rect canvasRect = new Rect(0.0, 0.0,
    		  this.DropTarget.ActualWidth, this.DropTarget.ActualHeight);
    		  Rect pickerRect = new Rect(0.0, 0.0, this.ActualWidth, this.ActualHeight);
    
    		  if (ContainedIn(canvasPoint, canvasRect) && !ContainedIn(pickerPoint, pickerRect)) 
    			{
    				Point imgPos = new Point(canvasPoint.Position.X -
    				imgPoint.Position.X, canvasPoint.Position.Y -
    				imgPoint.Position.Y);
    				this.PictureDropped(this, 
    				  new PictureDropEventArgs(img.Source,imgPos));
    			}
    		}
    		Canvas.SetZIndex(m_activeImage, 0);
    		m_activeImage = null;
    		m_activePosition = null;
    	}
    }

    Notice that as these pointer events are fed in, the GestureRecognizer instance translates them into manipulation started, updated, and completed events. We use those events to simply reposition the image using the static Canvas.SetTop and Canvas.SetLeft methods.

    void m_gestureRecognizer_ManipulationStarted(GestureRecognizer sender,
    	 ManipulationStartedEventArgs args)
    {
    	Point p = args.Cumulative.Translation;
    	Canvas.SetLeft(m_activeImage, m_activePosition.X + p.X);
    	Canvas.SetTop(m_activeImage, m_activePosition.Y + p.Y - m_itemOffset);
    }
    
    void m_gestureRecognizer_ManipulationUpdated(GestureRecognizer sender,
    	 ManipulationUpdatedEventArgs args)
    {
    	Point p = args.Cumulative.Translation;
    	Canvas.SetLeft(m_activeImage, m_activePosition.X + p.X);
    	Canvas.SetTop(m_activeImage, m_activePosition.Y + p.Y - m_itemOffset);
    }

    In contrast to the manipulation started and updated event handlers, the manipulation completed event handler just restores the image to its original position instead of using the event’s cumulative translation to reposition the image.

    void m_gestureRecognizer_ManipulationCompleted(GestureRecognizer sender,
    	 ManipulationCompletedEventArgs args)
    {
    	Canvas.SetLeft(m_activeImage, m_activePosition.X);
    	Canvas.SetTop(m_activeImage, m_activePosition.Y - m_itemOffset);
    }

    We intentionally omitted the code showing how we set up the picker’s target canvas and handle picture drops, as this is beyond the scope of this document. To find out more about the implementation details, please review the sample application code.
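
    As a rough idea of what that omitted drop handling could look like, a hypothetical PictureDropped handler on the hosting page might create a fresh Image from the event data and place it on the canvas. The PictureDropEventArgs property names and the DropCanvas element are assumptions based on how the event is raised above, not code from the sample:

    // Hypothetical handler on the page hosting the drop-target canvas.
    private void ImagePicker_PictureDropped(object sender, PictureDropEventArgs e)
    {
        var img = new Image { Source = e.Source, Width = 150, Height = 150 };
        Canvas.SetLeft(img, e.Position.X);
        Canvas.SetTop(img, e.Position.Y);
        DropCanvas.Children.Add(img);
    }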

    Using the Pen to Capture Ink


    The first thing to note when capturing ink input in a Windows Store app using C# is that there is no dedicated class for ink rendering. You do, however, get an InkManager class that is capable of translating raw pointer input into ink strokes you can use for rendering. The sample application provides a basic implementation of an ink renderer that can be used to render and clear ink strokes. For a more complete sample, please review “Simplified ink sample (Windows 8.1)”, found at: http://code.msdn.microsoft.com/windowsapps/Input-simplified-ink-sample-11614bbf.

    With a working ink renderer implementation in place, all you have to do to capture ink input is handle the target’s pointer pressed, moved, released, entered, and exited events. Note that we use the same code to handle pointer pressed and entered events. The same goes for released and exited events. The trick while reusing those events is to make sure that the pointer is in contact with the digitizer using the IsInContact property.
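
    In practice this means the same handlers can be wired to several events. A minimal sketch of that wiring, using the handler names from the snippets below:

    // Hypothetical event wiring: reuse the pressed handler for entered and the
    // released handler for exited, as described above.
    inkCanvas.PointerPressed  += inkCanvas_PointerPressed;
    inkCanvas.PointerEntered  += inkCanvas_PointerPressed;
    inkCanvas.PointerMoved    += inkCanvas_PointerMoved;
    inkCanvas.PointerReleased += inkCanvas_PointerReleased;
    inkCanvas.PointerExited   += inkCanvas_PointerReleased;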

    private void inkCanvas_PointerPressed(object sender, 
    	PointerRoutedEventArgs e)
    {
    	if (m_activePointerId != 0) return;
    	if (e.Pointer.PointerDeviceType == PointerDeviceType.Pen
                   && e.Pointer.IsInContact)
    	{
    		PointerPoint pointerPoint = e.GetCurrentPoint(this.inkCanvas);
    		m_renderer.StartRendering(pointerPoint, m_inkAttr);
    		m_inkMan.Mode = InkManipulationMode.Inking;
    		m_inkMan.ProcessPointerDown(pointerPoint);
    		m_activePointerId = e.Pointer.PointerId;
    		e.Handled = true;
    	}
    }

    In the pointer pressed event handler we first check whether we already have an active pointer device used to capture ink input. If not, we check the device type to make sure we are dealing with a pen device and extract the PointerPoint instance relative to our ink input canvas. We then have our renderer implementation start rendering live ink input and pass the pointer point to our InkManager instance for processing. We also make sure to store the source pointer device’s ID for future reference and mark the event as handled to prevent it from being propagated to other UI items.

    private void inkCanvas_PointerMoved(object sender, PointerRoutedEventArgs e)
    {
        if (m_activePointerId != e.Pointer.PointerId) return;
        if (e.Pointer.PointerDeviceType == PointerDeviceType.Pen)
        {
            if (e.Pointer.IsInContact)
            {
                PointerPoint pointerPoint = e.GetCurrentPoint(this.inkCanvas);
                m_renderer.UpdateStroek(pointerPoint);
                IList<PointerPoint> interPointerPoints =
                  e.GetIntermediatePoints(this.inkCanvas);
                for (int i = interPointerPoints.Count - 1; i >= 0; --i)
                    m_inkMan.ProcessPointerUpdate(interPointerPoints[i]);
                e.Handled = true;
            }
            else
            {
                HandlePenUp(e);
            }
        }
    }

    In the pointer moved event handler we check whether the source pointer device ID matches the ID of the pointer device we are using to capture ink input. We also check the pointer device type just in case, although if the active ID matches the source’s device ID we can be fairly sure that the device is indeed a pen and could just as well skip the additional if statement. Next, we check that the device is actually in contact with its digitizer. If it is not, we treat the event as a released event to prevent the application, in some rare situations, from getting stuck with an inactive pointer device.

    With the device identified as a pen that is in contact with its digitizer, we simply pass the device’s current position relative to our ink canvas in to update the simplified live stroke. The thing to remember with pointer moved events, since we are not dealing with a real-time operating system, is that there is no guarantee they will fire for every single hardware pen position change. This does not mean, however, that we cannot render the user’s input precisely. Instead, we simply use the event argument’s GetIntermediatePoints method to get a collection of all the aggregated pointer position changes and push them into our InkManager for processing. Passing in all of the intermediate points results in a much better representation of the actual strokes once we switch from live to permanent ink rendering.

    private void inkCanvas_PointerReleased(object sender,
      PointerRoutedEventArgs e)
    {
    	if (m_activePointerId != e.Pointer.PointerId) return;
    	if (e.Pointer.PointerDeviceType == PointerDeviceType.Pen)
    	{
    		HandlePenUp(e);
    	}
    }

    In the pointer released event handler, we again check if we are dealing with the device marked as the current active ink input device and check the device type just in case. The heavy lifting, however, has been moved to a private helper method to prevent any code duplication because, as you remember, the pointer moved event could also be interpreted as a released event in certain circumstances.

    private void HandlePenUp(PointerRoutedEventArgs e)
    {
        PointerPoint pointerPoint = e.GetCurrentPoint(this.inkCanvas);
        m_inkMan.ProcessPointerUp(pointerPoint);
        m_renderer.FinishRendering(pointerPoint);
        IReadOnlyList<InkStroke> strokes = m_inkMan.GetStrokes();
        int lastStrokeIndex = strokes.Count - 1;
        if (lastStrokeIndex >= 0)
            m_renderer.AddPermaInk(strokes[lastStrokeIndex], m_inkAttr);
        m_activePointerId = 0;
        e.Handled = true;
    }

    In the HandlePenUp helper method we first extract the pointer up position relative to our ink canvas. We then pass the up pointer point for processing by our InkManager and make our ink renderer finish live ink rendering. We then get all the strokes handled by our InkManager instance and pass the last one to the ink renderer. The renderer uses this data to produce a detailed stroke rendering using Bezier curves.
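
    The renderer’s AddPermaInk method is not shown in this excerpt. A minimal sketch of the idea, following the pattern from the simplified ink sample referenced above, is to turn the stroke’s rendering segments into a XAML Path built from Bezier segments; everything below except the WinRT types is an assumption:

    // Sketch only: convert an InkStroke into a Path of Bezier curves and add it
    // to the ink canvas. Uses Windows.UI.Input.Inking, Windows.UI.Xaml.Media,
    // and Windows.UI.Xaml.Shapes.
    private void AddPermaInk(InkStroke stroke, InkDrawingAttributes attr)
    {
        var segments = stroke.GetRenderingSegments();
        if (segments.Count == 0) return;

        var figure = new PathFigure { StartPoint = segments[0].Position };
        foreach (var seg in segments)
        {
            figure.Segments.Add(new BezierSegment
            {
                Point1 = seg.BezierControlPoint1,
                Point2 = seg.BezierControlPoint2,
                Point3 = seg.Position
            });
        }

        var geometry = new PathGeometry();
        geometry.Figures.Add(figure);

        inkCanvas.Children.Add(new Path
        {
            Data = geometry,
            Stroke = new SolidColorBrush(attr.Color),
            StrokeThickness = attr.Size.Width,
            StrokeLineJoin = PenLineJoin.Round
        });
    }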

    Using Touch to Manipulate Images


    As there are no convenience functions to help us generate scale, rotation, and translation transformation matrices, the first thing we need when implementing image touch manipulations is a set of matrix helper functions. As you can see in the listings below, the matrix helper functions are not very complicated and do a very good job of simplifying the manipulation code.

    private Matrix Rotation(double angle)
    {
    	double angnleRad = Rad(angle);
    	Matrix r = Matrix.Identity;
    	r.M11 = Math.Cos(angnleRad);
    	r.M21 = -Math.Sin(angnleRad);
    	r.M12 = Math.Sin(angnleRad);
    	r.M22 = Math.Cos(angnleRad);
    	return r;
    }
    
    private Matrix Translation(double x, double y)
    {
    	Matrix r = Matrix.Identity;
    	r.OffsetX = x;
    	r.OffsetY = y;
    	return r;
    }
    
    private Matrix Scale(double scale)
    {
    	Matrix r = Matrix.Identity;
    	r.M11 = scale;
    	r.M22 = scale;
    	return r;
    }
    
    private double Rad(double angle)
    {
    	return (Math.PI * angle) / 180.0;
    }
    
    private Matrix MatMull(Matrix a, Matrix b)
    {
    	Matrix r = Matrix.Identity;
    	r.M11 = (a.M11 * b.M11) + (a.M12 * b.M21);
    	r.M12 = (a.M11 * b.M12) + (a.M12 * b.M22);
    	r.M21 = (a.M21 * b.M11) + (a.M22 * b.M21);
    	r.M22 = (a.M21 * b.M12) + (a.M22 * b.M22);
    	r.OffsetX = (a.OffsetX * b.M11) + (a.OffsetY * b.M21) + b.OffsetX;
    	r.OffsetY = (a.OffsetX * b.M12) + (a.OffsetY * b.M22) + b.OffsetY;
    	return r;
    }

    With the matrix helper code in place, we can proceed to passing the pointer points into a properly set-up GestureRecognizer instance. The code is mostly identical to the code used for dragging pictures from the image picker; the only difference is that this time we must check the device type, as we only plan to support touch. In addition, the pointer released handler is much simpler here because we do not have to restore the image to its original position or fire any custom events.
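
    Since this recognizer drives rotation and scaling as well as translation, its GestureSettings presumably enable all three. A possible configuration (an assumption, mirroring the translation-only setup sketched earlier; the sample’s exact settings may differ):

    // Possible configuration for the image-manipulation recognizer (sketch).
    m_gestureRecognizer.GestureSettings = GestureSettings.ManipulationTranslateX
                                        | GestureSettings.ManipulationTranslateY
                                        | GestureSettings.ManipulationRotate
                                        | GestureSettings.ManipulationScale;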

    private void m_image_PointerPressed(object sender, PointerRoutedEventArgs e)
    {
    	if (e.Pointer.PointerDeviceType == Windows.Devices.Input.PointerDeviceType.Touch)
    	{
    		e.Handled = true;
    		m_gestureRecognizer.ProcessDownEvent(e.GetCurrentPoint(m_image));
    	}
    	else
    	{
    		e.Handled = false;
    	}
    }
    
    private void m_image_PointerMoved(object sender,PointerRoutedEventArgs e)
    { ... }
    
    private void m_image_PointerReleased(object sender, PointerRoutedEventArgs e)
    {
    	if (e.Pointer.PointerDeviceType == Windows.Devices.Input.PointerDeviceType.Touch)
    	{
    		e.Handled = true;
    		m_gestureRecognizer.ProcessUpEvent(e.GetCurrentPoint(m_image));
    		m_gestureRecognizer.CompleteGesture();
    	}
    	else
    	{
    		e.Handled = false;
    	}
    }

    With the pointer events handled, it is now time to consume the gesture recognizer’s manipulation started, updated, and completed events. Once we get a manipulation started event, we begin by setting the image’s z-index to 2 so it is rendered over any other image on the canvas and is first in line for pointer events. We then store the item’s current transformation matrix and proceed to create the new scale, rotation, and translation matrices using the previously defined helper functions. Once we have all of the individual matrices calculated, we combine them using the MatMull helper function and use the resulting matrix to set up the image’s transformation.

    private void m_gestureRecognizer_ManipulationStarted(
      GestureRecognizer sender, ManipulationStartedEventArgs args)
    {
    	Canvas.SetZIndex(m_image, 2);
    	m_inMatrix = (m_image.RenderTransform as MatrixTransform).Matrix;
    	Matrix scaleMatrix = Scale(args.Cumulative.Scale);
    	Matrix rotationMatrix = Rotation(args.Cumulative.Rotation);
    	Matrix translationMatrix = Translation(args.Cumulative.Translation.X,
    	  args.Cumulative.Translation.Y);
    
    	Matrix mat = MatMull(MatMull(MatMull(MatMull(m_originToTranslation,
    	  scaleMatrix), translationMatrix),rotationMatrix),
    	  m_originFromTranslation);
    	(m_image.RenderTransform as MatrixTransform).Matrix = 
    	  MatMull(mat, m_inMatrix);
    }
    
    private void m_gestureRecognizer_ManipulationUpdated(
      GestureRecognizer sender, ManipulationUpdatedEventArgs args)
    { … }
    
    private void m_gestureRecognizer_ManipulationCompleted(
      GestureRecognizer sender, ManipulationCompletedEventArgs args)
    {
    	Canvas.SetZIndex(m_image, 1);
    	Matrix scaleMatrix = Scale(args.Cumulative.Scale);
    	Matrix rotationMatrix = Rotation(args.Cumulative.Rotation);
    	Matrix translationMatrix = Translation(args.Cumulative.Translation.X,
    	  args.Cumulative.Translation.Y);
    
    	Matrix mat = MatMull(MatMull(MatMull(MatMull(m_originToTranslation,
    	  scaleMatrix), translationMatrix), rotationMatrix),
    	  m_originFromTranslation);
    	(m_image.RenderTransform as MatrixTransform).Matrix = 
    	  MatMull(mat, m_inMatrix);
    }

    The manipulation completed event handler has the same flow as its started and updated counterparts, with the small exception that it resets the image’s z-index to stop the image from being rendered on top of all other items and possibly hijacking their pointer events.
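
    One detail worth calling out: the m_originToTranslation and m_originFromTranslation matrices used in the handlers above are not defined in this excerpt. A plausible reading is that they translate the coordinate system to and from the manipulation pivot (for example, the image center) so that scaling and rotation happen around that point rather than the top-left corner. A purely hypothetical sketch, reusing the Translation helper shown earlier:

    // Hypothetical pivot setup; the sample may compute these differently.
    private Matrix m_originToTranslation;
    private Matrix m_originFromTranslation;

    private void PrepareOriginMatrices()
    {
        double cx = m_image.ActualWidth / 2.0;
        double cy = m_image.ActualHeight / 2.0;
        m_originToTranslation = Translation(-cx, -cy);  // move the pivot to the origin
        m_originFromTranslation = Translation(cx, cy);  // and back again afterwards
    }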

    Avoiding Problems with Multiple Input Methods and Multiple Interface Items


    In summary, here are some hints on avoiding problems when handling multiple input devices across user interface items:

    • Always remember to set the pointer event’s handled property to true if you do not want to propagate it to other items.
    • Try to keep track of the pointer entered and exited events; some pointer devices, for example graphical tablets, get different pointer IDs when the stylus goes in and out of range.
    • Remember to call CompleteGesture on your GestureRecognizer objects when feasible. Sometimes when working with multi-touch, pointers may abruptly leave the item’s scope, resulting in a dirty GestureRecognizer state. Calling CompleteGesture will help you restore the GestureRecognizer state to a clean condition and avoid future manipulation glitches.
    • Remember that some stylus drivers block touch input while in use; this helps in cases where users rest their hands on the screen while using the stylus. In general, assume that stylus input takes priority.

    Closing


    Touch, pen, keyboard, and mouse inputs are all valid ways of interacting with an application, and each has its strengths and weaknesses. This paper focused on touch and stylus interactions, noting that touch offers a natural, easy-to-learn, direct-manipulation style of interaction that enables users to combine gestures to express more than one command simultaneously. In contrast, stylus interactions work well when the application needs more accuracy than touch provides or when the user needs to write or draw.

    As demonstrated, incorporating mixed touch and stylus interaction in an application is not difficult, provided a few simple guidelines are followed and you keep track of which component is handling each event. You can download the full source for the demo application from the Downloads section above and try it yourself, or use it as reference material to create your own touch/stylus app.

    Intel, the Intel logo, and Ultrabook are trademarks of Intel Corporation in the U.S. and/or other countries.
    Copyright © 2013 Intel Corporation. All rights reserved.
    *Other names and brands may be claimed as the property of others.


    Case Study: Mystic Blocks Brings the Magic of Gesture Recognition and Facial Analysis to Desktop Gaming

    By Erin Wright

    Downloads


    Case Study Mystic Blocks [PDF 849KB]

    Developer Matty Hoban of Liverpool, England, is always looking for innovative ways to integrate his love of mathematics, physics, and software development with next-generation technology. So he welcomed the opportunity to participate in Phase 1 of the Intel® Perceptual Computing Challenge.

    The Intel Perceptual Computing Challenge invited coders to push the boundaries of the Intel® Perceptual Computing software development kit (SDK) and Creative* Interactive Gesture Camera, which together offer significant advancements in human–computing interactions, including:

    • Speech recognition
    • Close-range depth tracking (gesture recognition)
    • Facial analysis
    • Augmented reality

    Hoban is currently finishing his computing degree at the Open University. He is also the founder of Elefant Games, which develops tools for game developers in addition to the Bomb Breaker app for Windows* desktops and touch screens.

    Preparation

    After entering the Challenge, Hoban looked to the Perceptual Computing SDK and Creative Interactive Gesture Camera for inspiration. He explains, “I wanted to get a feel for them. I felt it wasn’t enough to take an existing idea and try to make it work with a perceptual camera. Whatever I made, I knew that it had to work best with this camera over all possible control methods.”

    Testing the Gesture Camera and Perceptual Computing SDK

    Hoban began by testing the capabilities of the Creative Interactive Gesture Camera: “The first thing I did, as anyone would do, was try the sample that comes with it. This lets you see that the camera is working, and it gives back real-time variables of angles for your head and the position of your hands.”

    Hoban then ran sample code through the Perceptual Computing SDK. He says, “Capturing hand and head movements is simple. There are multiple ways of utilizing the SDK: You can use call-backs, or you can create the SDK directly to get the data you need.”

    Prototyping with Basic Shapes

    After getting familiar with the Gesture camera and the SDK, Hoban began manipulating basic shapes using the gesture-recognition abilities of the close-range, depth-tracking usage mode. He says, “Once I looked at the samples and saw that the real-world values for your hands returned well, I started to get an idea for the game.”

    He developed a method of creating block-based geometric shapes using three two-dimensional matrices populated with ones and zeroes. Each matrix represents the front, bottom, or side of an individual shape. This method eliminated the need for three-dimensional (3D) software and expedited the process of generating shapes within the game. Figure 1 shows examples of the shape matrices.

    Figure 1. Constructing shapes with matrices

    With the Gesture camera and shape matrices in place, Hoban added facial analysis to track head position in relation to the visual perspective on the screen—and Mystic Blocks was born.

    Developing Mystic Blocks

    Mystic Blocks is a magician-themed puzzle game that requires players to use hand motions to turn keys so they fit approaching locks, as shown in Figure 2. The keys are a variety of 3D shapes generated by the matrix method described above.

    Figure 2. Mystic Blocks level 1

    “I’ve compared Mystic Blocks to the Hole in the Wall game show, where contestants need to position themselves correctly to fit an approaching hole,” explains Hoban. “Mystic Blocks does the same but with 3D geometry that uses rotation to allow the same shape to fit through many different-shaped holes.”

    Players begin by turning the keys with one hand, but as the game progresses, they have to coordinate both hands to move the keys in a 3D space. In addition to mastering hand coordination, players must repeat each sequence from memory on the second try as the locks approach with hidden keyholes. If players want a better view of the approaching locks, they can shift the game’s perspective by moving their heads from side to side. To see Mystic Blocks in action, check out the demonstration video at http://www.youtube.com/watch?v=XUqhcI_4nWo.

    Close-range Depth Tracking (Gesture Recognition)

    Mystic Blocks combines two usage modes from the Perceptual Computing SDK: close-range depth tracking and facial analysis. Close-range depth tracking recognizes and tracks hand positions and gestures such as those used in Mystic Blocks.

    Opportunities and Challenges of Close-range Depth Tracking

    Hoban found creative solutions for two challenges of close-range depth tracking: detection boundaries and data filtering.

    Detection Boundaries

    Mystic Blocks gives players text instructions to hold their hands in front of the camera. Although players are free to determine their own hand positions, Hoban’s usability tests revealed that hand motions are detected most accurately when players hold their palms toward the camera, with fingers slightly bent as if about to turn a knob, as demonstrated in Figure 3.

    Figure 3. Mystic Blocks hand position for gesture recognition

    Additional usability tests showed that players initially hold their hands too high above the camera. Therefore, a challenge for developers is creating user interfaces that encourage players to keep their hands within the detection boundaries.

    Currently, Mystic Blocks meets this challenge with graphics that change from red to green as the camera recognizes players’ hands, as shown in Figure 4.

    Figure 4. Mystic Blocks hand recognition alert graphics

    “I’d like to add a visual mechanism to let the user know when his or her hand strays out of range as well as some demonstrations of the control system,” notes Hoban. “I think that as the technology progresses, we’ll see standard gestures being used for common situations, and this will make it easier for users to know instinctively what to do.”

    Yet, even without these standardized movements, Hoban’s adult testers quickly adapted to the parameters of the gesture-based control system. The only notable control issue arose when a seven-year-old tester had difficulty turning the keys; however, Hoban believes that he can make the game more child friendly by altering variables to allow for a wider variety of hand rotations. He says, “I have noticed improvements in the Perceptual Computing SDK since I developed Mystic Blocks with the beta version, so I am confident that the controls can now be improved significantly.”

    Data Filtering

    During user testing, Hoban noticed that the hand-recognition function would occasionally become jumpy. He reduced this issue and improved the players’ ability to rotate the keys by filtering the incoming data. Specifically, the game logic ignores values that stray too far out of the established averages.

    In the future, Hoban would like to leverage the flexibility of the Perceptual Computing SDK to fine-tune the filters even further. For instance, he wants to enhance the game’s ability to distinguish between left and right hands and increase gesture recognition performance in bright, outdoor light.

    Head Tracking

    The Perceptual Computing SDK facial analysis usage mode can track head movements like those Mystic Blocks players use to adjust their visual perspectives. Hoban says, “The head tracking was simple to add. Without it, I would need to offset the view by a fixed distance, because the player’s view is directly behind the shape, which can block the oncoming keyhole.”

    Mystic Blocks’ head tracking is primarily focused on side-to-side head movements, although up and down movements can also affect the onscreen view to a lesser extent. This lets players find their most comfortable position and add to their immersion in the game. “If you’re looking directly towards the camera, you’ll have the standard view of the game,” explains Hoban. “But if you want to look around the corner or look to the side of the blocks to see what’s coming, you just make a slight head movement. The camera recognizes these movements and the way you see the game changes.”

    Sampling Rate

    The Creative Interactive Gesture Camera provides Hoban with a sampling rate of 30 fps. The Mystic Blocks application, which runs at 60 fps, can process gesture recognition and head tracking input as it becomes available. Hoban states, “The Gesture Camera is responsive, and I am quite impressed with how quickly it picks up the inputs and processes the images.”

    Third-party Technology Integration

    Mystic Blocks incorporates The Game Creators (TGC) App Game Kit with Tier 2 C++ library for rendering and the NVIDIA PhysX* SDK for collision detection and physics. Hoban also used several third-party development tools, including Microsoft Visual Studio* 2010, TGC 3D World Studio, and Adobe Photoshop*.

    These third-party resources integrated seamlessly with the Intel® Perceptual Computing technology. Hoban reports, “You just launch the camera and fire up Visual Studio. Then, you can call the library from the SDK and work with some example code. This will give you immediate results and information from the camera.”

    Figure 5 outlines the basic architecture behind Mystic Blocks in relation to the Gesture Camera.

    Figure 5. Mystic Blocks architecture diagram

    The Ultrabook™ Experience

    Mystic Blocks was developed and tested on an Ultrabook™ device with an Intel® Core™ i7-3367U CPU, 4 GB of RAM, 64-bit operating system, and limited touch support with five touch points. Hoban comments, “There were no problems with power or graphics. It handled the camera and the game, and I never came up against any issues with the Ultrabook.”

    The Future of Perceptual Computing

    Hoban believes that perceptual computing technologies will be welcomed by gamers and nongamers alike: “I don’t see it taking over traditional keyboards, but it will fit comfortably alongside established controls within most apps—probably supporting the most common scenarios, such as turning the page or going to the next screen with a flick of your hand. Devices will also be able to recognize your face, conveniently knowing your settings.”

    According to Hoban, gesture recognition is a perfect control system for motion-based games like Mystic Blocks; however, game developers will need to strike a balance between perceptual computing and traditional keyboard control methods in complex games with numerous options. “If you take your hands away from the camera to use the keyboard, you might lose focus on what you’re doing,” he comments. Instead, he advises developers to enrich complex games with gesture recognition for specific actions, such as casting spells or using a weapon.

    Facial analysis and voice recognition offer additional opportunities to expand and personalize gaming control systems. For example, Hoban predicts that facial analysis will be used to automatically log in multiple players at once and begin play exactly where that group of players left off, while voice recognition will be used alongside keyboards and gesture recognition to perform common tasks, such as increasing power, without interrupting game play.

    “I would like to add voice recognition to Mystic Blocks so that you could say ‘faster’ or ‘slower’ to speed up or slow down the game, because right now you can’t press a button without losing hand focus in the camera,” notes Hoban.

    And the Winner Is...

    Matty Hoban’s groundbreaking work with Mystic Blocks earned him a grand prize award in the Intel Perceptual Computing Challenge Phase 1. He is currently exploring opportunities to develop Mystic Blocks into a full-scale desktop game, while not ruling out the possibility of releasing the game on Apple iOS* and Google Android* devices. “Mystic Blocks is really suited to the camera and gesture inputs,” he says. “It will transfer to other devices, but if I develop it further, it will primarily be for perceptual computing on the PC.”

    In the meantime, Hoban has entered the Intel Perceptual Computing Challenge Phase 2 with a new concept for a top-down racing game that will allow players to steer vehicles with one hand while accelerating and braking with the other hand.

    Summary

    Matty Hoban’s puzzle game Mystic Blocks won a grand prize in the Intel Perceptual Computing Challenge Phase 1. Mystic Blocks gives players the unique opportunity to move shapes in a 3D space using only hand gestures. Players also have the ability to control the game’s visual perspective by moving their heads from side to side. During development, Hoban created his own innovative method of filtering data through the Perceptual Computing SDK and Creative Interactive Gesture Camera. He also gained valuable insight into the process of helping players adapt to gesture recognition and facial analysis.

    For More Information

    About the Author

    Erin Wright, M.A.S., is an independent technology and business writer in Chicago, Illinois.

    Intel, the Intel logo, Ultrabook, and Core are trademarks of Intel Corporation in the U.S. and/or other countries.
    Copyright © 2013 Intel Corporation. All rights reserved.
    *Other names and brands may be claimed as the property of others.

    Perceptual Computing: Practical Hands-Free Gaming

    Download Article

    Perceptual Computing: Practical Hands-Free Gaming [PDF 772KB]

    1. Introduction

    The concept of a hands-free game is not new, and many unsuccessful attempts have been made to abandon peripheral controllers to rely solely on the human body as an input device. Most of these experiments came from the console world, and only in the last few years have we seen controllerless systems gain significant traction.


    Figure 1: The Nintendo U-Force – State of the Art Hands-Free Gaming in 1989

    Early attempts at hands-free gaming were usually highly specialized peripherals that applied to only a handful of compatible games. The Sega Genesis peripheral called the Sega Activator was a good example of this: an octagonal ring placed on the floor with the player standing in its center. Despite the advertised ninja-style motions, the device’s controls simply mapped to 16 buttons and produced restrictive game play, leading to its silent demise.


    Figure 2: The Sega Activator – an early infra-red floor ring for the Genesis console

    More recent attempts such as the Sony Eye Toy* and Xbox* Live Vision gained further traction and captured the public’s imagination with the promise of hands-free control, but they failed to gain support from the developer community and only a few dozen hands-free games were produced.


    Figure 3: The Sony Eye Toy* – an early current generation attempt at controllerless gaming

    As you can see, hands-free technology has been prevalent in the console world for many years, and only recently have we seen widespread success in such devices thanks to the Xbox Kinect*. With the introduction of Perceptual Computing, hands-free game control is now possible on the PC and with sufficient accuracy and performance to make the experience truly immersive.

    This article provides developers with an overview of the topic, design considerations, and a case study of how one such game was developed. I’m assuming you are familiar with the Creative* Interactive Gesture camera and the Intel® Perceptual Computing SDK. Although the code samples are given in C++, the concepts explained are applicable to Unity* and C# developers as well. It is also advantageous to have a working knowledge of extracting and using the depth data generated by the Gesture camera.

    2. Why Is This Important

    It is often said by old-school developers that there are only about six games in the world, endlessly recreated with new graphics and sound, twists and turns in the story, and of course, improvements in the technology. When you start to break down any game into its component parts, you start to suspect this cynical view is frighteningly accurate. The birth and evolution of the platform genre was in no small way influenced by the fact the player had a joystick with only four compass directions and a fire button.

    Assuming then that the type of controller used influences the type of games created, imagine what would happen if you were given a completely new kind of controller, one that intuitively knew what you were doing, as you were doing it. Some amazing new games to play would be created that would open the doors to incredible new gaming experiences.

    3. The Question of Determinism

    One of the biggest challenges facing hands-free gaming and indeed Perceptual Computing in general is the ability for your application or game to determine what the user intends to do, 100% of the time. A keyboard where the A key failed to respond 1% of the time, or a mouse that selects the right button randomly every fifteen minutes would be instantly dismissed as faulty and replaced. Thanks to our human interface devices, we now expect 100% compliance between what we intend and what happens on screen.

    Perceptual Computing can provide no less. Given the almost infinite combination of input pouring in through the data streams, we developers have our work cut out! A mouse has a handful of dedicated input signals, controllers have a few times that, and keyboards more so. A Gesture Camera would feed in over 25,000 times more data than any traditional peripheral controller, and there is no simple diagram to tell you what any of it actually does.

    As tantalizing as it is to create an input system that can scan the player and extract all manner of signals from them, the question is can such a signal be detected 100% of the time? If it’s 99%, you must throw it out or be prepared for a lot of angry users!

    4. Overview of the Body Mass Tracker technique

    One technique that can be heralded as 100% deterministic is the Body Mass Tracker technique, which was featured in one of my previous articles at http://software.intel.com/en-us/articles/perceptual-computing-depth-data-techniques.

    By using the depth value to qualify each pixel and cumulatively adding together the coordinates of the qualifying depth pixels, you can arrive at a single averaged coordinate that indicates, in general, which side of the camera the user is located on. That is, when the user leans to the left, your application can detect this and provide a suitable coordinate to track them. When they lean to the right, the application will continue to follow them. When the user leans forward, this too is tracked. Given that the sample taken is absolute, individual details like hand movements, background objects, and other distractions are absorbed into a “whole view average.”

    The code is divided into two simple steps. The first averages all the qualifying depth pixel coordinates to produce a single coordinate, and the second draws a dot onto the camera picture image render so we can see whether the technique works. When run, you will see the dot center itself around the activity of the depth data.

    // find body mass center
    int iAvX = 0;
    int iAvY = 0;
    int iAvCount = 0;
    for (int y=0;y<(int)480;y++) 
    {
     for (int x=0;x<(int)640;x++) 
     {
      int dx=g_biguvmap[(y*640+x)*2+0];
      int dy=g_biguvmap[(y*640+x)*2+1];
      pxcU16 depthvalue = ((pxcU16*)ddepth.planes[0])[dy*320+dx];
      if ( depthvalue<65535/5 ) 
      {
       iAvX = iAvX + x;
       iAvY = iAvY + y;
       iAvCount++;
      }
     }
    }
    iAvX = iAvX / iAvCount;
    iAvY = iAvY / iAvCount;
    
    // draw body mass dot
    for ( int drx=-8; drx<=8; drx++ )
     for ( int dry=-8; dry<=8; dry++ )
      ((pxcU32*)dcolor.planes[0])[(iAvY+dry)*640+(iAvX+drx)]=0xFFFFFFFF;
    

    In Figure 4 below, notice the white dot has been rendered to represent the body mass coordinate. As the user leans right, the dot respects the general distribution by smoothly floating right, when he leans left, the dot smoothly floats to the left, all in real-time.


    Figure 4: The white dot represents the average position of all relevant depth pixels

    As you can see, the technique itself is relatively simple, but the critical point is that the location of this coordinate will be predictable under all adverse conditions that may face your game when in the field. People walking in the background, what you’re wearing, and any subtle factors being returned from the depth data stream are screened out. Through this real-time distillation process, what gets produced is pure gold, a single piece of input that is 100% deterministic.

    5. The Importance of Calibration

    There’s no way around it—your game or application has to have a calibration step. A traditional controller is hardwired for calibration and allows the user to avoid the mundane task of describing to the software which button means up, which one means down, and so on. Perceptual Computing calibration is not quite as radical as defining every input control for your game, but it’s healthy to assume this is the case.

    This step is more common sense than complicated and can be broken down into a few simple reminders that will help your game help its players.

    Camera Tilt– The Gesture camera ships with a vertical tilt mechanism that allows the operator to angle the camera to face up or down by a significant degree. Its lowest setting can even monitor the keyboard instead of the person sitting at the desk. It is vital that your game does not assume the user has the camera in the perfect tilt position. It may have been knocked out of alignment or recently installed. Alternatively, your user may be particularly tall or short, in which case they need to adjust the camera so they are completely in the frame.

    Background Depth Noise– If you are familiar with the techniques of filtering the depth data stream coming from the Gesture camera, you will know the problems with background objects interfering with your game. This is especially true at exhibits and conventions where people will be watching over the shoulder of the main player. Your game must be able to block this background noise by specifying a depth level beyond which the depth objects are ignored. As the person will be playing in an unknown environment, this depth level must be adjustable during the calibration step, and ideally using a hands-free mechanism.

    For a true hands-free game, it’s best not to resort to traditional control methods to “set up” your game, as this defeats the object of a completely hands-free experience. It might seem paradoxical to use hands-free controls to calibrate misaligned hands-free controls, but on-screen prompts and visual hints direct from the depth/color camera should be enough to orient the user. Ideally, the only hands-on activity is tilting the camera when you play the game for the first time.

    6. The UX and UI of Hands-Free Gaming

    Perceptual Computing is redefining the application user experience, dropping traditional user interfaces in favor of completely new paradigms to bridge the gap between human and machine. Buttons, keys, and menus are all placeholder concepts, constructed to allow humans to communicate with computers.

    When designing a new UI specific for hands-free gaming, you must begin by throwing away these placeholders and start with a blank canvas. It would be tempting to study the new sensor technologies and devise new concepts of control to exploit them, but we would then make the same mistakes as our predecessors.

    You must begin by imagining a user experience that is as close to human conversation as possible, with no constraints imposed by technology. Each time you degrade the experience for technical reasons, you’ll find your solution degenerating into a control system reminiscent of traditional methods. For example, using your hand to control four compass directions might seem cool, but it’s just a novel transliteration of a joystick, which in itself was a crude method of communicating the desires of the human to the device. In the real world, you simply walk forward or, in the case of a third person, speak instructions sufficiently detailed to achieve the objective.

    As developers, we encounter technical constraints all the time, and it’s tempting to ease the UX design process by working within these constraints. My suggestion is that your designs begin with blue-sky thinking, and meet any technical constraints as challenges. As you know, the half-life of a problem correlates to the hiding powers of the solution, and under the harsh gaze of developers, no problem survives for very long.

    So how do we translate this philosophy into practical steps and create great software? A good starting point is to imagine something your seated self can do and associate that with an intention in the software.

    The Blue Sky Thought

    Imagine having a conversation with an in-game character, buying provisions, and haggling with the store keeper. Imagine the store keeper taking note of which items you are looking at and starting his best pitch to gain a sale. Imagine pointing at an item on a shelf and saying “how much is that?” and the game unfolding into a natural conversation. We have never seen this in any computer game and yet it’s almost within reach, barring a few technical challenges.

    It was in the spirit of this blue-sky process that I contemplated what it might be like to swim like a fish, no arms or legs, just fins, drag factors, nose direction, and a thrashing tail. Similar to the feeling a scuba diver has, fishy me could slice through the water, every twist of my limbs causing a subtle change in direction. This became the premise of my control system for a hands-free game you will learn about later in this article.

    Player Orientation

    When testing your game, much like the training mode of a console game, you must orient the player in how to play the game from the very first moment. With key, touch, and controller games, you rightly assume the majority of your audience will have a basic toolbox of knowledge to figure out how to play your game. Compass directions, on screen menus, and action buttons are all common instruments we use to navigate any game. Hands-free gaming throws most of that away, which means in addition to creating a new paradigm for game control we also need to explain and nurture the player through these new controls.

    A great way to do this is to build it into the above calibration step, so that the act of setting up the Gesture camera and learning the player’s seated position is also the process of demonstrating how the game controls work.

    Usability Testing

    When testing a hands-free game, additional factors come into play that would not normally be an issue with controller-based games. For example, even though pressing left on the control pad means left no matter who is playing your game, turning your head left might not produce the same clear-cut response. That is not to say you have breached the first rule of 100% determinism, but the instructions you gave and the response of the human player may not tally up perfectly. Only by testing your game with a good cross section of users will you be able to determine whether your calibration and in-game instructions are easy to interpret and repeat without outside assistance.

    The closest equivalent to traditional considerations is to realize that a human hand cannot press all four action buttons at once in a fast-paced action game, due to the fact you only have one thumb available and four buttons. Perhaps after many months of development, you managed such a feat and it remained in the game, but testing would soon chase out such a requirement. This applies more so to hands-free game testing, where the capabilities between humans may differ wildly and any gesture or action you ask them to perform should be as natural and comfortable as possible.

    One example of this is a hands-free game that required holding your hand out to aim fireballs at your foe. A great game and lots of fun, but it was discovered when shown to conference attendees that after about 4 minutes their arm would be burning with the strain of playing. To get a sense of what this felt like, hold a bag of sugar at arm’s length for 4 minutes or so.

    It is inevitable that we’ll see a fair number of hands-free games that push the limit of human capability and others that teasingly dance on the edge of it. Ultimately, the player wants to enjoy the game more than they want an upper body workout, so aim for ease of use and comfort and you’ll win many fans in the hands-free gaming space.

    7. Creating a Hands-Free Game – A Walkthrough

    Reading the theory is all well and good, but I find the most enlightening way to engage with the material is when I see it in action. What better way to establish the credibility of this article than to show you a finished game inspired by the lessons preached here.


    Figure 5: Title screen from the game DODGE – a completely hands-free game experiment

    The basic goal when writing DODGE was to investigate whether a completely hands-free game could be created that required no training and was truly hands-free; that is, an application that, once started from the OS, would require no keyboard, mouse, or touch and would be powered entirely by hands-free technology.

    Having established the Body Mass Tracker as my input method of choice, I began writing a simple game based on the necessity to dodge various objects being thrown in your general direction. However, due to the lack of an artist, I had to resort to more primitive techniques for content generation and created a simple rolling terrain that incrementally added stalactites and stalagmites as the game progressed.

    As it happened, the “cave” theme worked much better visually than any “objects being thrown at me” game I could have created in the same timeframe. So with my content in place, I proceeded to the Perceptual Computing stage.

    Borrowing from previous Perceptual Computing prototypes, I created a small module that plugged into the Dark Basic Professional programming language and fed the body mass tracker coordinates into my game. Within the space of an hour, I was able to control my dodging behavior without touching the keyboard or mouse.

    What I did not anticipate until it was coded and running was the nuance and subtlety you get from the BMT (Body Mass Tracker): every slight turn of the head, lean of the body, or twist of the shoulder produces an ever so slight course correction by the pilot in the game. It was like having a thousand directions to describe north! It was this single realization that led me to conclude that Perceptual Gaming is not a replacement for peripheral controllers, but their successor. No controller in the world, no matter how clever, allows you to control game space using your whole body.

    Imagine you are Superman for the day, and what it might feel like to fly—to twist and turn, and duck and roll at the speed of thought. As I played my little prototype, this was what I glimpsed, a vision of the future where gaming was as fast as thought.

    Now to clarify, I certainly don’t expect you to accept these words at face value, as the revelation only came to me immediately after playing this experience for myself. What I ask is that if you find yourself with a few weekends to spare, try a few experiments in this area and see if you can bring the idea of “games as quick as thought” closer to reality.

    At the time of writing, the game DODGE is still in development, but it will be made available through various distribution points and announced through my Twitter and blog feeds.

    8. Tricks and Tips

    Do’s

    • Test your game thoroughly with non-gamers. They are the best test subjects for a hands-free game as they will approach the challenge from a completely humanistic perspective.
    • Keep your input methods simple and intuitive so that game input is predictable and reliable.
    • Provide basic camera information through your game such as whether the camera is connected and providing the required depth data. No amount of calibration in the world will help if the user has not plugged the camera in.

    Don’ts

    • Do not interpret data values coming from the camera as absolute values. Treat all data as relative to the initial calibration step so that each player, in their own unique environment, enjoys the same experience (see the sketch after this list). If you developed and tested your game in a dark room with the subject very close to the camera, imagine your player in a bright room sitting far from the keyboard.
    • Do not assume your user knows how the calibration step is performed and supplement these early requests from the user with on-screen text, voice-over, or animation.
    • Never implement an input method that requires the user to have substantial training as this will frustrate your audience and even create opportunities for non-deterministic results.
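
    As an illustration of the relative-calibration rule, here is a minimal sketch (the structure, names, and value ranges are hypothetical, not from any SDK sample) of normalizing a raw tracker coordinate against values captured during calibration:

    // Minimal sketch: normalize a raw tracker coordinate against the values
    // captured during calibration, so gameplay reacts to *relative* movement.
    // The struct and value ranges are hypothetical, for illustration only.
    struct Calibration
    {
        float restX;    // raw X reported while the player sat naturally
        float rangeX;   // raw distance covered by a comfortable full lean
    };

    // Returns roughly -1.0 (full lean left) .. +1.0 (full lean right).
    float NormalizedLean(float rawX, const Calibration &cal)
    {
        if (cal.rangeX <= 0.0f) return 0.0f;          // not calibrated yet
        float lean = (rawX - cal.restX) / cal.rangeX; // relative to *this* player
        if (lean < -1.0f) lean = -1.0f;               // clamp to a safe range
        if (lean > 1.0f) lean = 1.0f;
        return lean;
    }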

    9. Final Thoughts

    You may have heard the expression “standing on the shoulders of giants,” the idea that we use the hard-won knowledge of the past as a foundation for our future innovations. The console world had over 20 years of trial and error before it mastered the hands-free controller for its audience, and as developers we must learn from its successes and failures. Simply offering a hands-free option is not enough; we must guard against creating a solution that becomes an object of novelty ten years from now. We must create what the gamer wants, not what the technology can do, and when we achieve that we’ll have made a lasting contribution to the evolution of hands-free gaming.

    About The Author

    When not writing articles, Lee Bamber is the CEO of The Game Creators (http://www.thegamecreators.com), a British company that specializes in the development and distribution of game creation tools. Established in 1999, the company and surrounding community of game makers are responsible for many popular brands including Dark Basic, FPS Creator, and most recently App Game Kit (AGK).

    The application that inspired this article and the blog that tracked its seven week development can be found here: http://ultimatecoderchallenge.blogspot.co.uk/2013/02/lee-going-perceptual-part-one.html

    Lee also chronicles his daily life as a coder, complete with screen shots and the occasional video here: http://fpscreloaded.blogspot.co.uk

    Intel and the Intel logo are trademarks of Intel Corporation in the U.S. and/or other countries.
    Copyright © 2013 Intel Corporation. All rights reserved.
    *Other names and brands may be claimed as the property of others.

  • ultrabook
  • desktop
  • applications
  • Perceptual Computing
  • Gesture Recognition
  • Developers
  • Microsoft Windows* 8
  • Windows*
  • Intermediate
  • Perceptual Computing
  • Microsoft Windows* 8 Desktop
  • Sensors
  • User Experience and Design
  • Laptop
  • URL
  • Case Study: LinkedPIXEL Wins Intel® Perceptual Computing Challenge with Gesture-based Drawing Application


    By Karen Marcus

    Download Article


    LinkedPIXEL Wins Intel® Perceptual Computing Challenge with Gesture-based Drawing Application [PDF 1.1MB]

    Matt Pilz, owner and sole developer at LinkedPIXEL, was one of the grand prize recipients in Phase 1 of the Intel® Perceptual Computing Challenge. His application, Magic Doodle Pad, enables users of all ages and abilities to create two-dimensional artwork without touching anything physical.

    Pilz taught himself perceptual computing and how to use the Intel Perceptual Computing software development kit (SDK) while creating his application for the Challenge. Although Magic Doodle Pad does include mouse input as a backup method, the primary input method for the application is hand gestures, which Pilz kept as simple as possible.

    Although he faced some challenges in design as well as a changing SDK, the process gave Pilz a solid understanding of the success factors needed when developing perceptual computing applications.

    The Product: Magic Doodle Pad


    Magic Doodle Pad (see Figure 1) was inspired by a previous application that Pilz developed, called Scribblify, which was a doodle application, as well. However, says Pilz, “Magic Doodle Pad was an entirely new application developed specifically for the Challenge and the perceptual camera.” After reviewing information about perceptual computing, he reasoned that such an artistic application would work well with camera gestures and that it would be enjoyable for users to just sit back and draw without having to think too much about it.

    During development, Pilz didn’t consider a possible target market for his application other than wanting it to be accessible for different age groups and demographics. Rather, he considered the features he wanted to include. He says, “I knew that this was going to be a casual application, not a full, high-end illustration tool set. I obviously had a limited timeline, and the features were reflective of that.”

    Pilz reveals that when he started the Challenge, he knew nothing about the Intel Perceptual Computing SDK but knew that it seemed to be a trend. He says, “With Microsoft and Kinect* and all of these different gesture-based devices coming out, I figured it was something to look into and was excited to give it a shot.”


    Figure 1. Magic Doodle Pad title screen

    Development Using the Intel Perceptual Computing SDK


    To learn about the Intel Perceptual Computing SDK, Pilz used a number of resources. He says, “My first resource was the Intel SDK documentation; I read through it and looked at the examples and code snippets that Intel provided. One particular item I used was the Intel snippet on GitHub. There I found several good, basic demonstrations of how to interact with the various sensors of the camera. So I was able to look at those and analyze them. After that, I was able to make the connection and come up with some more advanced functionality.”

    In addition, Pilz used the Intel developer support forums. He says, “They have a perceptual computing subforum, and it has a lot of helpful people, not just Intel staff members but other members of the community who can share their thoughts and experiences using the SDK.”

    Pilz has good things to say about the evolution of the SDK: “I think Intel has done a phenomenal job of improving the SDK since I used it during the beta version. Back then, it was just a simple PDF with a few pages of information. Now, with the latest SDK that I downloaded recently, the core documentation is over 700 pages long, so it’s much easier to quickly review different components of the SDK.”

    Deciding Which Elements to Include

    Time constraints limited how many different elements Pilz could include, and he made decisions based on functionality and ease of use. Voice recognition was one element Pilz considered. He thought it could be used to switch colors or brushes without users having to move their hands way over to the side of the screen. However, he says, “With the beta SDK, it was not easy to implement, and the documentation was still incomplete, so I scratched that idea.”

    Hand Gestures

    Pilz knew from the beginning that hand and finger gesture recognition would be key elements, because, he says, “I wanted to create a user experience that would emulate physical paintbrush techniques. Enabling users to just wave their hands and move their fingers around in front of a camera to create art was the number one priority” (see Figure 2).


    Figure 2. Magic Doodle Pad canvas

    For the hand gestures and hand recognition, Pilz’s objective was to make them as natural to users as possible. He explains: “I started mapping what gestures would correspond to what features of the app by picking up an actual brush and experimenting to see how the hand is held when users are drawing and what it should be like when they’re not drawing. I programmed in the gestures so that when users extend a finger, it acts like the paintbrush. While the finger is extended or if the hand is closed as if holding on to a paintbrush, it begins drawing. As soon as users have their hands wide open, it’s as if they dropped the brush, and they can navigate the menus without drawing. My goal was to create the most natural-feeling gestures possible.”

    Pilz conducted limited user testing, which helped him see how the application handled different hand sizes and gesture styles across a variety of users.

    Magic Doodle Pad incorporates several specific gestures. Pilz describes them: “For the main menu navigation and to activate options, the main hand gesture is an open hand, so a palm facing the camera. When the app detects that the hand is open, cross-hairs appear on the screen to allow users to track where their hand is in relation to the screen. When they hover over any menu item, they see a little timer countdown. If they wait 1–2 seconds, then it activates that function. Another gesture is to enlarge or dismiss thumbnails when looking at previously created art, users flip open their hand or clap the hand back into itself; that brings the images out, and then shrinks them back” (see Figure 3).


    Open Palm (Facing Camera)

    • Move on-screen cross-hairs or cursor
    • Stop drawing to screen if in draw mode
    • Quickly snap fingers shut to close the enlarged preview image (when in Gallery view)
    • Hover over a menu item until the circle indicator graphic fills to execute the action

    Pointer Finger (Facing Camera)

    • Start drawing on the screen if in draw mode.
    • Alternatively, close all fingers as if holding a brush (this may be less accurate).

    Closed Palm (Facing Camera)

    • Alternate method of drawing on the screen when in draw mode.
    • Quickly flick fingers open to enlarge the preview image (when in Gallery view).

    Figure 3. Magic Doodle Pad gestures

    With the Intel Perceptual Computing SDK, Pilz notes, each hand could be assigned a separate object, with its own tracking information and data that could be pulled up easily. He compared one object to the next to get the information about each hand and, from there, reused most of the code with both hands. He says, “I believe the SDK classifies hands as primary and secondary, with affiliated labels. It was just a matter of creating two objects for each one, and then tracking the information for each.”

    Pilz also notes that some logic is built in to the SDK. He says, “I made it so users could track both hands at once on the screen, and then swipe up to two different brushes and two different colors for each hand. The SDK made that pretty straightforward. It’s just a matter of having two objects instead of one and doing conditional checks to determine which hand is performing what action.”

    For this application, no manual keyboard functionality is included. Pilz explains, “The entire interface was designed around the gesture concepts using the camera. So I did add basic mouse support to achieve the same objective, but the primary means of input is designed to be gestures.”

    The application continuously checks for hand gestures and tracks the user’s hand. If the hand moves out of camera range, the application checks to see if it has come back into view. When it is in view, gesture-based controls take precedence. However, says Pilz, “If the hand is still in view and the mouse moves, then the application hides the cross-hairs and assumes that the user is now using the mouse as the primary input method. This is the case until the gesture sensor detects another movement, and then it switches back to gesture mode. At any given time, either the mouse or the gesture controls are active. I designed it to not have any negative impact based on which input method is used.”
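
    The arbitration Pilz describes comes down to a small piece of state: whichever device produced the most recent input wins. The sketch below shows that idea in rough form; the names are hypothetical and not taken from Magic Doodle Pad’s source:

    // Rough sketch of "last input wins" arbitration between gesture and mouse.
    // Names are hypothetical; Magic Doodle Pad's actual implementation differs.
    enum class InputMode { Gesture, Mouse };

    struct InputArbiter
    {
        InputMode mode = InputMode::Gesture;

        // Called whenever the camera reports a tracked hand in view.
        void OnGestureSample()            { mode = InputMode::Gesture; }

        // Called whenever the OS reports mouse movement.
        void OnMouseMove()                { mode = InputMode::Mouse; }

        // The UI shows gesture cross-hairs only while gestures are active.
        bool ShowCrossHairs() const       { return mode == InputMode::Gesture; }
    };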

    Depth Sensor

    Pilz also experimented with the depth sensor of the camera. He says, “My idea with that was to make the brush size change depending on how close or far away from the camera the user’s hand was. It worked, but I knew a lot of different users with different setups and different distances from the camera would be using it. It wasn’t a technical issue; it was more that in that time frame, I just couldn’t seem to pinpoint the specific scale ratio to make the brush usable by the greatest number of users. So I decided not to keep that as an added feature.”

    Design Challenges and Achievements

    Pilz’s primary design challenge was with sensitivity levels in determining how open or closed a hand had to be to constitute the corresponding hand gesture. In early testing, at times users tried to draw, but the system would detect their hand as being open and wouldn’t let them. He observes, “How the user was situated or the size of his or her hand affected how the application interpreted how open or closed the hand was. I had to do some fine-tuning with the sensitivity levels to determine when the hand is open and when it’s closed.”

    Another concern was the fact that the depth and gesture sensors on the camera have a 320 x 240 resolution, while the application ran at 1024 x 768 or higher, so there was less pixel data to work with than would be ideal. Pilz notes, “Once the user’s hand moves toward the edge of the screen, the application would sometimes have issues trying to pick up what was going on. I solved that by hiding the cross-hairs if the hand is outside of the sensor range, assuming that the user is now using the mouse instead of hand gestures.”
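
    Scaling positions from the camera’s 320 x 240 tracking space up to the application’s resolution is a simple proportional mapping; a sketch of it (with hypothetical names, not Pilz’s code) looks like this:

    // Sketch: map a hand position from the 320 x 240 sensor space to the
    // application window, clamping near the edges where tracking gets noisy.
    struct Point { float x; float y; };

    Point SensorToScreen(Point sensor, float screenW, float screenH)
    {
        const float kSensorW = 320.0f, kSensorH = 240.0f;
        Point p;
        p.x = (sensor.x / kSensorW) * screenW;
        p.y = (sensor.y / kSensorH) * screenH;
        if (p.x < 0.0f) p.x = 0.0f; else if (p.x > screenW) p.x = screenW;
        if (p.y < 0.0f) p.y = 0.0f; else if (p.y > screenH) p.y = screenH;
        return p;
    }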

    Pilz also had to find a way to manage simultaneous gestures from both hands. He says, “The secondary hand was added during the final stages of the development process. At first, I just created all the functionality and made it work with one gesture. Really, the only time the application uses the simultaneous hand gestures is during drawing.”

    Another design challenge was trying to come up with an interface that uses gesture controls. Pilz comments, “This is an entirely new type of input because you can’t design interfaces as you would for a traditional mouse. To make menu options selectable only when the user really wants to select them versus accidentally as soon as they brush their hand over a certain item—that was one of the design considerations for which I struggled to come up with a good solution.”

    To resolve this issue, Pilz decided not to have a button toggle as soon as the user’s hand hovered over it. He says, “I realized that would be a terrible user experience decision because, for example, if they’re drawing and they move their hand over the exit button, suddenly it would exit the drawing. I was inspired by Nintendo Wii* and Kinect games, where they ask you to move your hand in to a specific location for a few seconds, just to verify that that’s where you really mean to go. In my mind, that was the best solution at the time: As soon as the user hovers over an item, a little circle appears that starts to fill in. If they don’t want to confirm that selection, they can move their hand away and the circle disappears. But if they stay there until the circle fills up, that option is selected.”
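
    The hover-to-confirm behavior he describes is essentially a dwell timer. Below is a minimal sketch of that pattern; the names and the 1.5-second threshold are illustrative rather than taken from the application:

    // Minimal dwell-to-select sketch: an item activates only after the cursor
    // has hovered over it continuously for dwellSeconds.
    struct DwellSelector
    {
        int   hoveredItem  = -1;    // -1 means "not over any menu item"
        float hoverTime    = 0.0f;  // seconds spent over the current item
        float dwellSeconds = 1.5f;  // illustrative threshold

        // Call once per frame; returns the item to activate, or -1.
        int Update(int itemUnderCursor, float dt)
        {
            if (itemUnderCursor != hoveredItem)
            {
                hoveredItem = itemUnderCursor;   // moved to a new item (or off)
                hoverTime   = 0.0f;              // the fill-circle resets
                return -1;
            }
            if (hoveredItem == -1) return -1;
            hoverTime += dt;
            if (hoverTime >= dwellSeconds)
            {
                hoverTime = 0.0f;                // avoid repeat-firing
                return hoveredItem;              // activate this item
            }
            return -1;
        }

        // 0..1 progress, useful for drawing the filling circle indicator.
        float Progress() const
        {
            return (hoveredItem == -1) ? 0.0f
                 : (hoverTime >= dwellSeconds ? 1.0f : hoverTime / dwellSeconds);
        }
    };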

    In terms of achievements, the finger tracking function worked particularly well. Pilz notes, “With this app, I didn’t need to get so specific as to track each individual finger. The SDK has a property of the geo-node object called LABEL_HAND_FINGERTIP that simply finds whatever fingertip is the furthest out, and then it starts tracking that one. This made it easy to track an individual finger without having to concern myself with each individual fingertip.”

    Other Challenges

    Pilz had some additional challenges with the SDK itself. He observes, “When I started development, the SDK was only on its second beta, so at that point several elements were not yet available.” Another challenge was having the voice recognition component downloadable separately from the SDK. The Dragon software used for voice recognition was about 600 MB. Pilz says, “The third beta had a web-based downloader, so every time I wanted to install the SDK on a different computer for testing, I’d have to re-download the entire SDK instead of having an offline downloader.”

    Development Experience


    Perceptual computing development and using the Intel Perceptual Computing SDK was a learning experience for Pilz and one that resulted in new innovations.

    Lessons Learned

    The biggest lesson Pilz learned during the development process was always to check for SDK updates. “This relates to the inherent risks of developing using beta products,” he says. “With the SDK changing, I had to go in and revise my code to reflect the changes. That’s just the nature of beta software, and it’s important to try to find a change log to know exactly what’s changed from version to version.” Keeping an eye on these changes is important for developers to ensure that the application continues to work when a change happens that requires modifications.

    Another lesson was the importance of becoming familiar with all the resources, documentation, code samples, and the Intel support forum. Pilz notes, “Those are the sources I stuck with to answer all the questions I had during this development process.”

    Pilz found programming for perceptual computing easier to learn than he expected. He says, “Some developers may be reluctant to really embrace perceptual computing because of how new and unfamiliar that concept is. That’s the boat I was in when I first started; it seemed complex to me, because you’re dealing with hand gestures and these different input mechanisms. All I can say is, it’s not bad. In less than a week, I had a solid understanding of everything that I needed to do and even had a good chunk of my application done using these new perceptual concepts. By sticking to the SDK and reading the documentation, I would tell people not to be worried. It’s a pretty straightforward process once you get in. In fact, I found it easier to use the perceptual computing SDK than some other SDKs I’ve used in the past.”

    Pilz also encourages other developers to think abstractly when coming up with perceptual applications: “You might have a menu at the top that works well with the mouse, but you should ask yourself if there are better ways to handle that through just gestures. Try to devise new and more natural-feeling ways to achieve the same results. I’m not saying that perceptual computing will ever replace traditional keyboard input, but it definitely has its benefits over mouse input for different applications.”

    Innovations

    Pilz considers this application a success—if for no other reason than he developed it so it could be used without a traditional input device or even a touchscreen. He acknowledges that the user interface that enables users to hover over different menu items to confirm the selection isn’t necessarily innovative, but he says, “To me, that’s an innovative user experience approach, to allow them to navigate between all the menu items and use all of the features without ever having to use the mouse.”

    In addition, the gestures Pilz used for the image gallery were new, as users can just pop open the image they’re hovering over by flipping their finger outwards (see Figure 4).


    Figure 4. Magic Doodle Pad gallery with the Exit button at the top

    Using the Intel Perceptual Computing SDK

    Pilz appreciated the fact that the SDK included prebuilt libraries compatible with most of the mainstream frameworks, including Unity*, Processing, and openFrameworks*, as well as the raw C++ libraries. Pilz says, “By providing prebuilt libraries for these different programming environments, Intel made it that much easier to quickly get started with the SDK instead of having to reinvent the wheel by incorporating it by hand into some framework. It was easy to incorporate the SDK into Processing thanks to the work that was done to make it accessible through these different platforms.”

    With respect to using the Processing framework to develop Magic Doodle Pad, Pilz says, “I know a lot of entrants used Unity, but because I wasn’t making a game, I went with Processing, which is built on Java* technology, and that made it easy to handle image manipulation and so forth, which I used throughout the application. GitHub libraries available through Intel for the Processing framework made it easy for me to download and dive in without having to spend significant amounts of time reading what had already been done. After reviewing the documentation and some of the code snippets, it only took me about an hour to have my first perceptual prototype done.”

    “Another impressive feature,” adds Pilz, “is how simple it is to track hands and fingers. You can retrieve all that data basically in one line of code to track a hand, and then another line of code to track an individual finger. It allowed me to focus more on the heart of the application rather than trying to deal with the technicalities.”

    Pilz also benefitted from the extensive documentation that Intel provided for the SDK as well as code snippets. “Intel actually has several different repositories online where many different code snippets are available for all of these different platforms,” says Pilz. “I give Intel a lot of credit for that, because I’ve used other SDKs in the past, and it was much more cumbersome to get started, because the documentation was lacking.”

    Future Plans

    With the Challenge over, Pilz continues to improve Magic Doodle Pad. He notes that, in particular, voice recognition could make the application more user-friendly: “It’s a lot easier to say, ‘Color blue,’ or ‘Brush seven,’ or ‘Save drawing’ than it is to get your hand to the corner of the screen to access those different menu interfaces.”

    Another feature that could be incorporated is different gestures for each individual finger. Pilz explains, “For a painting app, it might be fun to have a different brush, a different color, or even 10 of the same brush. It would be more like finger paints at that point.” Pilz would still like to incorporate the depth sensor to make the brush scale depending on how close the user is to the camera.

    A three-dimensional (3D) component is another possibility. Pilz has experimented a bit with this mode, using the Oculus VR Rift*, a virtual reality headset. Adding this component to Magic Doodle Pad would enable users to essentially sculpt a 3D object. Pilz says, “I want to think that you could get much finer detail by being able to move your hands around and rotate the object in midair without having to use any controls. I believe that a higher level of detail could be achieved with that down the road. I’ve already seen 3D modeling applications that supported hand gestures and found them quite captivating.” He adds, “I imagine if you can get to the point where you can control everything by waving your hand in front of a sensor camera, suddenly you’re entirely immersed in whatever project you’re working on, so it is an exciting prospect.”

    Pilz sees possibilities for other creative applications. He says, “It’s an organic approach. Traditionally, you’ll have a paintbrush in your hand. And now, you can actually simply pick up a pencil or something and hold it in midair and draw while watching the screen. I think there’s a level of immersiveness with perceptual computing—to have that free experience of just waving your hands without having to touch any physical object and to be creating. We are entering what is to me like a science fiction film, where you see all these people who make all these hand gestures and achieve great things. It’s getting to that point, I think, with perceptual computing.”

    Having worked in the medical industry for a few years, Pilz sees applications for perceptual computing there as well: “I see this as having huge potential in the medical industry, where there might be physicians who are seeing patients and have gloves on and can’t really interact with a physical device for contamination reasons. But if a program with medical records comes along that supports gesture recognition, I can see them just being able to make hand gestures without having to interfere with anything else or take off gloves to pull up different medical records. To me, that’s one venue that I think has a lot of potential for perceptual cameras.”

    LinkedPIXEL


    For several years, Pilz was a web developer for a health facility. In early 2011, he formally established LinkedPIXEL as a business; he now focuses almost exclusively on app development using different platforms. He says, “I’ve done a lot of Apple iOS* and Google Android* development, and I’ve now done some that have been released through Intel AppUp®. I’ve even experimented with developing apps for niche markets like the Panasonic VIERA* Connect TV.”

    Pilz wants to stay on top of such innovations and “hopefully come up with the next big thing.” In particular, he plans to continue developing unique applications for current and next-generation platforms. He has also become interested in Intel’s recently unveiled HTML5 development center, which includes many tools and resources for developing web-based apps that are cross-platform compatible. Pilz may also contribute to one or more of several ongoing Intel competitions relating to perceptual computing and application innovation.

    To find out more about LinkedPIXEL go to linkedpixel.com.

    About the Author


    Karen Marcus, M.A., is an award-winning technology marketing writer who has 16 years of experience. She has developed case studies, brochures, white papers, data sheets, solution briefs, articles, web site copy, video scripts, and other documents for such companies as Intel, IBM, Samsung, HP, Amazon Web Services, Amazon Webstore, Microsoft, and EMC. Karen is familiar with a variety of current technologies, including cloud computing, IT outsourcing, enterprise computing, operating systems, application development, digital signage, and personal computing.

    Intel and the Intel logo are trademarks of Intel Corporation in the U.S. and/or other countries.
    Copyright © 2013 Intel Corporation. All rights reserved.
    *Other names and brands may be claimed as the property of others.

  • ULTRABOOK™
  • applications
  • Intel® Perceptual Computing Challenge.
  • Developers
  • Microsoft Windows* 8
  • Windows*
  • Beginner
  • Intel® Perceptual Computing SDK
  • Perceptual Computing
  • Development Tools
  • Sensors
  • User Experience and Design
  • Laptop
  • Desktop
  • URL
  • Ultrabook™ Desktop App Development for Windows* 8: A Touch- and Sensor-Enabled Photo Application


    Download the source code:

    PhotoApplication.zip

    Introduction

    As is well known, Ultrabook™ devices typically feature sleek styling, attractive designs, and smooth touch interaction. Compared with most standard laptops, however, the real advantage of Ultrabook devices lies in their hardware capabilities. In addition to the standard features supported by laptops, Ultrabook devices offer unique capabilities such as a touch screen, GPS, and an accelerometer, along with support for orientation sensors, ambient light sensors, NFC, and other sensors. Consumers now have more choices in personal computing devices, such as desktops, laptops, and tablets. Most consumers still prefer a desktop or laptop for complex software applications and data storage. With the ongoing arrival of smart third-party apps and multitasking, tablets, with their extreme portability, offer an excellent alternative to laptops. However, even though tablets can handle some work- or business-related tasks, they still cannot completely replace laptops.

    Convertible Ultrabook devices, which can serve as both a tablet and a laptop, satisfy these multiple consumer needs with a single device. They offer the functionality of a laptop together with the ease of use and convenience of a tablet. OEMs can get quite creative when designing convertible Ultrabook devices. For example, some convertible designs support a detachable keyboard, so the display can be used as a standalone tablet once the keyboard is removed. Other convertible designs let the display slide or flip to switch between tablet and laptop modes.

    A convertible Ultrabook device running Windows 8 plays both roles while providing powerful functionality. With Intel hardware running Windows 8, users can run both desktop apps and Windows Store (formerly called Metro-style) apps. Microsoft's new WinRT APIs give developers the tools to create Windows Store apps on Windows 8. In addition, some WinRT APIs can also be used to develop desktop apps on Windows 8, which means developers can easily port their traditional Windows applications to Windows 8 desktop apps.

    The series of articles below walks through a simple photo application for Ultrabook devices. The application shows how developers can use the unique features of Ultrabook devices (including the touch screen and the GPS, ambient light, orientation, and power sensors) to create smart, dynamic applications. The code snippets and source code will help developers port their traditional Windows applications to Windows 8. This article also explains how to access unmanaged Win32 API code from managed code on Windows 8.

    A Photo Application for Ultrabook Devices

    This is a simple Windows 8 application that users can use to take photos, view pictures, geo-tag images, and more. The application supports both touch and mouse/keyboard input, giving users a great experience in both tablet and laptop modes.

    The following articles use code snippets from the photo application to help you quickly understand different aspects of application development for Ultrabook devices. The first part covers the user interface design considerations for developing applications with fluid touch support. You will also learn how to use several touch gestures, such as flick, swipe, pinch, and zoom.

    Adding Touch Support to Desktop Applications for Ultrabook™ Devices Running Windows* 8

    User Interface Guidelines for a Great User Experience

    The articles in this section use code snippets to cover the implementation details of power awareness and context awareness, and how to use sensors in your application. Click the articles you are interested in to learn more:

    Developing Power-Efficient Desktop Applications for Ultrabook™ Devices Running Windows* 8

    Enabling the Accelerometer Sensor in Desktop Applications for Ultrabook™ Devices Running Windows* 8

    Enabling the Orientation Sensor in Desktop Applications for Ultrabook™ Devices Running Windows* 8

    Enabling the Ambient Light Sensor (ALS) in Desktop Applications for Ultrabook™ Devices Running Windows* 8

    Although this application is intended for illustration purposes only, it also provides a dashboard that displays data from the various sensors, including ambient light, orientation, power level, brightness, and more.

    Related Articles

    Notices

    This document contains information about Intel products. This document does not grant any license, express or implied, by estoppel or otherwise, to any intellectual property rights. Intel assumes no liability whatsoever, and Intel disclaims any express or implied warranty relating to the sale and/or use of Intel products, including liability or warranties relating to (i) fitness for a particular purpose, (ii) merchantability, or (iii) infringement of any patent, copyright, or other intellectual property right.

    Unless otherwise agreed to in writing by Intel, Intel products are not designed or intended for any application in which the failure of the Intel product could create a situation where personal injury or death may occur.

    Intel may change product specifications and descriptions at any time, without notice. Designers must not rely on the absence or characteristics of any features or instructions marked "reserved" or "undefined." Intel reserves these for future definition and shall have no responsibility whatsoever for conflicts or incompatibilities arising from future changes to them. The information here is subject to change without notice. Do not finalize a design with this information.

    The products described in this document may contain design defects or errors, known as errata, which may cause the product to deviate from published specifications. Current characterized errata are available on request.

    Contact your local Intel sales office or your distributor to obtain the latest specifications before placing your product order.

    For copies of documents referenced in this document, or other Intel literature with an order number, call 1-800-548-4725 or visit: http://www.intel.com/design/literature.htm

    Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. Performance tests, such as SYSmark and MobileMark, are measured using specific computer systems, hardware, software, operating systems, and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products.

    Any software source code provided in this document is furnished under a software license and may only be used or copied in accordance with the terms of that license.

    Intel, Ultrabook, and the Intel logo are trademarks of Intel Corporation in the U.S. and/or other countries.

    Copyright © 2012 Intel Corporation. All rights reserved.

    *Other names and brands may be claimed as the property of others.

    Attachments

    Download photoapplication.zip (19.26 MB)

  • Code
  • Developers
  • Microsoft Windows* 8
  • Windows*
  • Intermediate
  • Microsoft Windows* 8 Desktop
  • Sensors
  • Touch Interfaces
  • Laptop
  • URL

  • Ultrabook™ and Tablet Windows* 8 Sensors Development Guide


    Introduction

    This guide gives developers an overview of the Microsoft Windows 8.1 sensor application programming interfaces (APIs) for Windows 8.1 Desktop and Windows Store applications, with a focus on the sensor capabilities available in Windows 8.1 Desktop mode. This guide summarizes the APIs that enable creating interactive applications using common sensors included with Windows 8.1, such as accelerometers, magnetometers, and gyroscopes.

    Programming Choices for Windows 8.1

    Developers have several API choices when programming sensors on Windows 8.1. The touch-friendly app environment is called "Windows Store apps." Windows Store apps can run software that was developed against the Windows Runtime (WinRT) interface. The WinRT sensor API is part of the overall WinRT library. For more details, see the MSDN Sensor API library.

    Traditional Win Forms or MFC applications are now called "desktop apps" because they run in the Desktop Window Manager environment. Desktop apps can use the native Win32*/COM API, the .NET-style API, or a select subset of WinRT APIs.

    The following WinRT APIs are accessible from desktop apps:

    • Windows.Sensors (Accelerometer, Gyrometer, Ambient Light Sensor, Orientation Sensor...)
    • Windows.Networking.Proximity.ProximityDevice (NFC)
    • Windows.Devices.Geolocation (GPS)
    • Windows.UI.Notifications.ToastNotification
    • Windows.Globalization
    • Windows.Security.Authentication.OnlineId (includes LiveID integration)
    • Windows.Security.CryptographicBuffer (useful binary encoding/decoding functions)
    • Windows.ApplicationModel.DataTransfer.Clipboard (access and monitor the Windows 8 clipboard)

    In both cases, these APIs go through a Windows middleware component called the Windows Sensor Framework. The Windows Sensor Framework defines the sensor object model, and the different APIs "bind" to that object model in slightly different ways.

    Differences between desktop app and Windows Store app development are discussed later in this document. For simplicity, we will only consider desktop app development. For information about Windows Store app development, see the API Reference for Windows Store apps.

    Sensors

    There are many kinds of sensors, but we are interested in the ones required for Windows 8.1: accelerometers, gyroscopes, ambient light sensors, compasses, and GPS. Windows 8.1 represents the physical sensors with object-oriented abstractions, and programmers use the APIs to interact with those objects in order to manipulate the sensors. Table 1 below shows how each sensor can be accessed from Windows 8 desktop apps and Windows Store apps.

    Feature/Toolset columns: Windows 8.1 Desktop mode apps (C++; C#/VB; JavaScript*/HTML5) and Windows Store apps (C++, C#, VB & XAML; JavaScript/HTML5; Unity* 4.2).

    Sensor rows: orientation sensors (accelerometer, inclinometer, gyrometer); light sensor; NFC; GPS.

    Table 1. Windows* 8.1 developer environment feature matrix

    Figure 1 below shows that there are more sensor objects than pieces of physical hardware. Windows defines some "logical sensor" objects by combining information from multiple physical sensors. This is called "sensor fusion."

    Figure 1. The different types of sensors supported by Windows* 8

    Sensor Fusion

    The physical sensor chips have some inherent limitations. For example:

    • An accelerometer measures linear acceleration, which is a combined measurement of relative motion and Earth's gravity. If you want to know the computer's tilt, you have to do some math.
    • A magnetometer measures the strength of magnetic fields, which indicates the direction of Earth's magnetic North Pole.

    These measurements are subject to inherent drift problems that can be corrected using raw data from the gyro. Both measurements are (trigonometrically) dependent on the tilt of the computer relative to level with the Earth's surface. To obtain, for example, a heading aligned with the Earth's true North Pole (the magnetic North Pole is in a different location and moves over time), corrections must be applied.

    Sensor fusion (Figure 2) obtains raw data from multiple physical sensors (especially the accelerometer, gyro, and magnetometer), performs mathematical operations that correct for the sensors' individual limitations, computes data that is more useful to humans, and exposes that data as logical sensor abstractions. Someone must implement the transformations required to translate physical sensor data into the abstract sensor data. If your system design has a SensorHub, the fusion operations take place inside the microcontroller firmware. If your system design does not have a SensorHub, the fusion operations must be done in one or more device drivers provided by the IHVs and/or OEMs.

    Figure 2. Sensor fusion by combining the output of multiple sensors
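
    To give a feel for the math involved, the sketch below computes a tilt-compensated compass heading from raw accelerometer and magnetometer readings. It assumes one particular axis convention (X forward, Y right, Z down) and is only an illustration of the idea; the fusion performed by a SensorHub or driver also folds in gyro data and filtering.

    // Illustrative sketch of the math behind "tilt-compensated compass" fusion.
    // Assumes the aerospace body-frame convention (X forward, Y right, Z down),
    // accelerometer readings in g, magnetometer readings in any consistent unit.
    #include <cmath>

    double TiltCompensatedHeadingDegrees(double ax, double ay, double az,
                                         double mx, double my, double mz)
    {
        // Roll and pitch estimated from gravity as seen by the accelerometer.
        double roll  = std::atan2(ay, az);
        double pitch = std::atan2(-ax, std::sqrt(ay * ay + az * az));

        // De-rotate the magnetometer vector back onto the horizontal plane.
        double bx = mx * std::cos(pitch)
                  + my * std::sin(pitch) * std::sin(roll)
                  + mz * std::sin(pitch) * std::cos(roll);
        double by = mz * std::sin(roll) - my * std::cos(roll);

        // Heading relative to magnetic north, normalized to 0..360 degrees.
        const double kPi = 3.14159265358979323846;
        double heading = std::atan2(by, bx) * 180.0 / kPi;
        return (heading < 0.0) ? heading + 360.0 : heading;
    }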

    Identifying Sensors

    To manipulate a sensor, the system needs to identify it and refer to it. The Windows Sensor Framework defines a number of categories into which sensors are grouped. It also defines a number of specific sensor types. Table 2 lists some of the sensor types available to desktop applications.

    • "All": matches every sensor type in every category below
    • Biometric: Human Presence, Human Proximity*, Touch
    • Electrical: Capacitance, Current, Electrical Power, Inductance, Potentiometer, Resistance, Voltage
    • Environmental: Atmospheric Pressure, Humidity, Temperature, Wind Direction, Wind Speed
    • Light: Ambient Light
    • Location: Broadcast, Gps, Static
    • Mechanical: Boolean Switch, Boolean Switch Array, Force, Multivalue Switch, Pressure, Strain, Weight
    • Motion: Accelerometer 1D, Accelerometer 2D, Accelerometer 3D, Gyrometer 1D, Gyrometer 2D, Gyrometer 3D, Motion Detector, Speedometer
    • Orientation: Compass 1D, Compass 2D, Compass 3D, Device Orientation, Distance 1D, Distance 2D, Distance 3D, Inclinometer 1D, Inclinometer 2D, Inclinometer 3D
    • Scanner: Barcode, Rfid

    Table 2. Sensor types and categories

    The sensor types required by Windows are:

    • Accelerometer, gyro, compass, and ambient light are the required "real/physical" sensors
    • Device Orientation and Inclinometer are the required "virtual/fusion" sensors (note: the compass also includes fusion-enhanced/tilt-compensated data)
    • GPS is required if a WWAN radio is present; otherwise GPS is optional
    • "Human Proximity" is a common entry on the required list, but it is not required at this time.

    All of these constants correspond to globally unique identifiers (GUIDs). Table 3 below shows a sample of sensor categories and types, the constant names used by Win32/COM and .NET, and their underlying GUID values.

    Each entry below gives the identifier, the Win32*/COM constant, the .NET constant, and the GUID:

    • Category "All": SENSOR_CATEGORY_ALL / SensorCategories.SensorCategoryAll / {C317C286-C468-4288-9975-D4C4587C442C}
    • Biometric category: SENSOR_CATEGORY_BIOMETRIC / SensorCategories.SensorCategoryBiometric / {CA19690F-A2C7-477D-A99E-99EC6E2B5648}
    • Electrical category: SENSOR_CATEGORY_ELECTRICAL / SensorCategories.SensorCategoryElectrical / {FB73FCD8-FC4A-483C-AC58-27B691C6BEFF}
    • Environmental category: SENSOR_CATEGORY_ENVIRONMENTAL / SensorCategories.SensorCategoryEnvironmental / {323439AA-7F66-492B-BA0C-73E9AA0A65D5}
    • Light category: SENSOR_CATEGORY_LIGHT / SensorCategories.SensorCategoryLight / {17A665C0-9063-4216-B202-5C7A255E18CE}
    • Location category: SENSOR_CATEGORY_LOCATION / SensorCategories.SensorCategoryLocation / {BFA794E4-F964-4FDB-90F6-51056BFE4B44}
    • Mechanical category: SENSOR_CATEGORY_MECHANICAL / SensorCategories.SensorCategoryMechanical / {8D131D68-8EF7-4656-80B5-CCCBD93791C5}
    • Motion category: SENSOR_CATEGORY_MOTION / SensorCategories.SensorCategoryMotion / {CD09DAF1-3B2E-4C3D-B598-B5E5FF93FD46}
    • Orientation category: SENSOR_CATEGORY_ORIENTATION / SensorCategories.SensorCategoryOrientation / {9E6C04B6-96FE-4954-B726-68682A473F69}
    • Scanner category: SENSOR_CATEGORY_SCANNER / SensorCategories.SensorCategoryScanner / {B000E77E-F5B5-420F-815D-0270A726F270}
    • Human Proximity type: SENSOR_TYPE_HUMAN_PROXIMITY / SensorTypes.SensorTypeHumanProximity / {5220DAE9-3179-4430-9F90-06266D2A34DE}
    • Ambient Light type: SENSOR_TYPE_AMBIENT_LIGHT / SensorTypes.SensorTypeAmbientLight / {97F115C8-599A-4153-8894-D2D12899918A}
    • Gps type: SENSOR_TYPE_LOCATION_GPS / SensorTypes.SensorTypeLocationGps / {ED4CA589-327A-4FF9-A560-91DA4B48275E}
    • Accelerometer 3D type: SENSOR_TYPE_ACCELEROMETER_3D / SensorTypes.SensorTypeAccelerometer3D / {C2FB0F5F-E2D2-4C78-BCD0-352A9582819D}
    • Gyrometer 3D type: SENSOR_TYPE_GYROMETER_3D / SensorTypes.SensorTypeGyrometer3D / {09485F5A-759E-42C2-BD4B-A349B75C8643}
    • Compass 3D type: SENSOR_TYPE_COMPASS_3D / SensorTypes.SensorTypeCompass3D / {76B5CE0D-17DD-414D-93A1-E127F40BDF6E}
    • Device Orientation type: SENSOR_TYPE_DEVICE_ORIENTATION / SensorTypes.SensorTypeDeviceOrientation / {CDB5D8F7-3CFD-41C8-8542-CCE622CF5D6E}
    • Inclinometer 3D type: SENSOR_TYPE_INCLINOMETER_3D / SensorTypes.SensorTypeInclinometer3D / {B84919FB-EA85-4976-8444-6F6F5C6D31DB}

    Table 3. Sample constants and globally unique identifiers (GUIDs)

    The GUIDs listed above are only the most commonly used ones; many more are available. At first you might think GUIDs are tedious, but there is one big reason for using them: extensibility. Because the APIs don't care about the actual sensor names (they only pass GUIDs around), vendors can create new GUIDs for "value-add" sensors.

    Generating New GUIDs

    Microsoft provides a tool in Visual Studio* for generating new GUIDs. Figure 3 shows a screenshot of this operation in Visual Studio. All the vendor has to do is publish the new GUIDs, and new functionality can be exposed without any change to the Microsoft APIs or any operating system code.

    Figure 3. Defining a new GUID for a value-add sensor
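
    For example, a vendor could declare a value-add sensor type in its own header with the DEFINE_GUID macro. The name and GUID value below are invented placeholders, shown purely to illustrate the mechanism:

    // Hypothetical vendor "value-add" sensor type. The GUID value would come
    // from the Visual Studio Create GUID tool; this one is a placeholder.
    #include <initguid.h>
    DEFINE_GUID(SENSOR_TYPE_VENDOR_POSTURE,
        0x11111111, 0x2222, 0x3333, 0x44, 0x44, 0x55, 0x55, 0x55, 0x55, 0x55, 0x55);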

    Using the Sensor Manager Object

    For an app to use a sensor, the Microsoft Sensor Framework needs a way to "bind" objects to the real hardware. It does this in a plug-and-play fashion using a special object called the Sensor Manager.

    Asking by Type

    An app can ask for a specific type of sensor, such as Gyrometer3D. The Sensor Manager consults the list of sensor hardware present on the computer and returns a collection of matching objects bound to that hardware. Although a sensor collection may contain 0, 1, or many objects, it usually has just one. The C++ sample code below shows the use of the Sensor Manager object's GetSensorsByType method to search for 3-axis gyros and return the results in a sensor collection. Note that you must ::CoCreateInstance() the Sensor Manager object first.

    // Additional includes for sensors
    #include <InitGuid.h>
    #include <SensorsApi.h>
    #include <Sensors.h>
    // Create a COM interface to the SensorManager object.
    ISensorManager* pSensorManager = NULL;
    HRESULT hr = ::CoCreateInstance(CLSID_SensorManager, NULL, CLSCTX_INPROC_SERVER, 
        IID_PPV_ARGS(&pSensorManager));
    if (FAILED(hr))
    {
        ::MessageBox(NULL, _T("Unable to CoCreateInstance() the SensorManager."), 
            _T("Sensor C++ Sample"), MB_OK | MB_ICONERROR);
        return -1;
    }
    // Get a collection of all 3-axis Gyros on the computer.
    ISensorCollection* pSensorCollection = NULL;
    hr = pSensorManager->GetSensorsByType(SENSOR_TYPE_GYROMETER_3D, &pSensorCollection);
    if (FAILED(hr))
    {
        ::MessageBox(NULL, _T("Unable to find any Gyros on the computer."), 
            _T("Sensor C++ Sample"), MB_OK | MB_ICONERROR);
        return -1;
    }
     

    Asking by Category

    An app can ask for sensors by category, such as motion sensors. The Sensor Manager consults the list of sensor hardware on the computer and returns a collection of motion objects bound to that hardware. The SensorCollection may contain 0, 1, or many objects. On most computers, the collection will have two motion objects: Accelerometer3D and Gyrometer3D.

    The C++ sample code below shows the use of the Sensor Manager object's GetSensorsByCategory method to search for motion sensors and return the results in a sensor collection.

    // Additional includes for sensors
    #include <InitGuid.h>
    #include <SensorsApi.h>
    #include <Sensors.h>
    // Create a COM interface to the SensorManager object.
    ISensorManager* pSensorManager = NULL;
    HRESULT hr = ::CoCreateInstance(CLSID_SensorManager, NULL, CLSCTX_INPROC_SERVER, 
        IID_PPV_ARGS(&pSensorManager));
    if (FAILED(hr))
    {
        ::MessageBox(NULL, _T("Unable to CoCreateInstance() the SensorManager."), 
            _T("Sensor C++ Sample"), MB_OK | MB_ICONERROR);
        return -1;
    }
    // Get a collection of all 3-axis Gyros on the computer.
    ISensorCollection* pSensorCollection = NULL;
    hr = pSensorManager->GetSensorsByCategory(SENSOR_CATEGORY_MOTION, &pSensorCollection);
    if (FAILED(hr))
    {
        ::MessageBox(NULL, _T("Unable to find any Motion sensors on the computer."), 
            _T("Sensor C++ Sample"), MB_OK | MB_ICONERROR);
        return -1;
    }
     

    Asking by Category "All"

    In practice, the most efficient approach is for an app to ask for all of the sensors on the computer at once. The Sensor Manager consults the list of sensor hardware on the computer and returns a collection of all the objects bound to that hardware. The sensor collection may contain 0, 1, or many objects. On most computers, the collection will have 7 or more objects.

    C++ does not have a GetAllSensors call, so you must use GetSensorsByCategory(SENSOR_CATEGORY_ALL, …) instead, as shown in the sample code below.
    // Additional includes for sensors
    #include <InitGuid.h>
    #include <SensorsApi.h>
    #include <Sensors.h>
    // Create a COM interface to the SensorManager object.
    ISensorManager* pSensorManager = NULL;
    HRESULT hr = ::CoCreateInstance(CLSID_SensorManager, NULL, CLSCTX_INPROC_SERVER, 
        IID_PPV_ARGS(&pSensorManager));
    if (FAILED(hr))
    {
        ::MessageBox(NULL, _T("Unable to CoCreateInstance() the SensorManager."), 
            _T("Sensor C++ Sample"), MB_OK | MB_ICONERROR);
        return -1;
    }
    // Get a collection of all sensors on the computer.
    ISensorCollection* pSensorCollection = NULL;
    hr = pSensorManager->GetSensorsByCategory(SENSOR_CATEGORY_ALL, &pSensorCollection);
    if (FAILED(hr))
    {
        ::MessageBox(NULL, _T("Unable to find any sensors on the computer."), 
            _T("Sensor C++ Sample"), MB_OK | MB_ICONERROR);
        return -1;
    }
     

    Sensor Life Cycle: Enter and Leave Events

    On Windows, sensors, like most hardware devices, are treated as plug-and-play devices. Sensors can be connected and disconnected in several different scenarios:

    1. You may have a USB-based sensor external to the system and plug it into a USB port.
    2. Sensors may be attached over an unreliable wireless interface (such as Bluetooth*) or a wired interface (such as Ethernet), connecting and disconnecting as the link comes and goes.
    3. If Windows Update upgrades a sensor's device driver, the sensor appears to disconnect and then reconnect.
    4. When Windows shuts down (to S4 or S5), sensors appear to disconnect.

    In sensor terms, a plug-and-play connection is called an Enter event, and a disconnection is called a Leave event. Resilient apps need to handle both.

    The Enter Event Callback

    The Sensor Manager reports a sensor Enter event if a sensor is plugged in while the app is running; no Enter event occurs for sensors that were already plugged in when the app started. In C++/COM, you must use the SetEventSink method to hook the callback. The callback cannot simply be a function; it must be an entire class that inherits from ISensorManagerEvents and also implements IUnknown. In addition, the ISensorManagerEvents interface must implement the callback function:

    	STDMETHODIMP OnSensorEnter(ISensor *pSensor, SensorState state);
    // Hook the SensorManager for any SensorEnter events.
    pSensorManagerEventClass = new SensorManagerEventSink();  // create C++ class instance
    // get the ISensorManagerEvents COM interface pointer
    HRESULT hr = pSensorManagerEventClass->QueryInterface(IID_PPV_ARGS(&pSensorManagerEvents)); 
    if (FAILED(hr))
    {
        ::MessageBox(NULL, _T("Cannot query ISensorManagerEvents interface for our callback class."), 
            _T("Sensor C++ Sample"), MB_OK | MB_ICONERROR);
        return -1;
    }
    // hook COM interface of our class to SensorManager eventer
    hr = pSensorManager->SetEventSink(pSensorManagerEvents); 
    if (FAILED(hr))
    {
        ::MessageBox(NULL, _T("Cannot SetEventSink on SensorManager to our callback class."), 
            _T("Sensor C++ Sample"), MB_OK | MB_ICONERROR);
        return -1;
    }
    
    

    Code: Hooking the callback for the Enter event

    Below is the C++/COM equivalent of the Enter event callback. All of the initialization steps from the main loop would be performed in this function. In fact, it is more efficient to refactor your code so that your main loop merely calls OnSensorEnter to simulate an Enter event.

    STDMETHODIMP SensorManagerEventSink::OnSensorEnter(ISensor *pSensor, SensorState state)
    {
        // Examine the SupportsDataField for SENSOR_DATA_TYPE_LIGHT_LEVEL_LUX.
        VARIANT_BOOL bSupported = VARIANT_FALSE;
        HRESULT hr = pSensor->SupportsDataField(SENSOR_DATA_TYPE_LIGHT_LEVEL_LUX, &bSupported);
        if (FAILED(hr))
        {
            ::MessageBox(NULL, _T("Cannot check SupportsDataField for SENSOR_DATA_TYPE_LIGHT_LEVEL_LUX."), 
                _T("Sensor C++ Sample"), MB_OK | MB_ICONINFORMATION);
            return hr;
        }
        if (bSupported == VARIANT_FALSE)
        {
            // This is not the sensor we want.
            return -1;
        }
        ISensor *pAls = pSensor;  // It looks like an ALS, memorize it. 
        ::MessageBox(NULL, _T("Ambient Light Sensor has entered."), 
            _T("Sensor C++ Sample"), MB_OK | MB_ICONINFORMATION);
        .
        .
        .
        return hr;
    }
    
    

    Code: The Enter event callback

    The Leave Event

    The individual sensor (not the Sensor Manager) reports when the Leave event happens. This code is the same as the earlier hooking of the callback for the Enter event.

    // Hook the Sensor for any DataUpdated, Leave, or StateChanged events.
    SensorEventSink* pSensorEventClass = new SensorEventSink();  // create C++ class instance
    ISensorEvents* pSensorEvents = NULL;
    // get the ISensorEvents COM interface pointer
    HRESULT hr = pSensorEventClass->QueryInterface(IID_PPV_ARGS(&pSensorEvents)); 
    if (FAILED(hr))
    {
        ::MessageBox(NULL, _T("Cannot query ISensorEvents interface for our callback class."), 
            _T("Sensor C++ Sample"), MB_OK | MB_ICONERROR);
        return -1;
    }
    hr = pSensor->SetEventSink(pSensorEvents); // hook COM interface of our class to Sensor eventer
    if (FAILED(hr))
    {
        ::MessageBox(NULL, _T("Cannot SetEventSink on the Sensor to our callback class."), 
            _T("Sensor C++ Sample"), MB_OK | MB_ICONERROR);
        return -1;
    }
    
    

    Code: Hooking the callback for the Leave event

    The OnLeave event handler receives the ID of the departing sensor as an argument.

    STDMETHODIMP SensorEventSink::OnLeave(REFSENSOR_ID sensorID)
    {
        HRESULT hr = S_OK;
        ::MessageBox(NULL, _T("Ambient Light Sensor has left."), 
            _T("Sensor C++ Sample"), MB_OK | MB_ICONINFORMATION);
        // Perform any housekeeping tasks for the sensor that is leaving.
        // For example, if you have maintained a reference to the sensor,
        // release it now and set the pointer to NULL.
        return hr;
    }
    
    

    Code: The Leave event callback

    Picking Sensors for an App

    Different types of sensors report different information. Microsoft calls each piece of information a Data Field, and Data Fields are grouped together in a SensorDataReport. A computer may have more than one sensor that can supply the information an app needs. The app doesn't care which sensor the information came from, as long as it is available.

    Table 4 shows the constant names of the most commonly used Data Fields for Win32/COM and .NET. Like the sensor identifiers, these constants are just human-readable names for their corresponding GUIDs. This approach provides extensibility for Data Fields beyond the "well known" ones Microsoft has predefined.

    Each entry below gives the Win32*/COM constant, the .NET constant, and the property key (GUID, PID):

    • SENSOR_DATA_TYPE_TIMESTAMP / SensorDataTypeTimestamp / {DB5E0CF2-CF1F-4C18-B46C-D86011D62150},2
    • SENSOR_DATA_TYPE_LIGHT_LEVEL_LUX / SensorDataTypeLightLevelLux / {E4C77CE2-DCB7-46E9-8439-4FEC548833A6},2
    • SENSOR_DATA_TYPE_ACCELERATION_X_G / SensorDataTypeAccelerationXG / {3F8A69A2-07C5-4E48-A965-CD797AAB56D5},2
    • SENSOR_DATA_TYPE_ACCELERATION_Y_G / SensorDataTypeAccelerationYG / {3F8A69A2-07C5-4E48-A965-CD797AAB56D5},3
    • SENSOR_DATA_TYPE_ACCELERATION_Z_G / SensorDataTypeAccelerationZG / {3F8A69A2-07C5-4E48-A965-CD797AAB56D5},4
    • SENSOR_DATA_TYPE_ANGULAR_VELOCITY_X_DEGREES_PER_SECOND / SensorDataTypeAngularVelocityXDegreesPerSecond / {3F8A69A2-07C5-4E48-A965-CD797AAB56D5},10
    • SENSOR_DATA_TYPE_ANGULAR_VELOCITY_Y_DEGREES_PER_SECOND / SensorDataTypeAngularVelocityYDegreesPerSecond / {3F8A69A2-07C5-4E48-A965-CD797AAB56D5},11
    • SENSOR_DATA_TYPE_ANGULAR_VELOCITY_Z_DEGREES_PER_SECOND / SensorDataTypeAngularVelocityZDegreesPerSecond / {3F8A69A2-07C5-4E48-A965-CD797AAB56D5},12
    • SENSOR_DATA_TYPE_TILT_X_DEGREES / SensorDataTypeTiltXDegrees / {1637D8A2-4248-4275-865D-558DE84AEDFD},2
    • SENSOR_DATA_TYPE_TILT_Y_DEGREES / SensorDataTypeTiltYDegrees / {1637D8A2-4248-4275-865D-558DE84AEDFD},3
    • SENSOR_DATA_TYPE_TILT_Z_DEGREES / SensorDataTypeTiltZDegrees / {1637D8A2-4248-4275-865D-558DE84AEDFD},4
    • SENSOR_DATA_TYPE_MAGNETIC_HEADING_COMPENSATED_MAGNETIC_NORTH_DEGREES / SensorDataTypeMagneticHeadingCompensatedTrueNorthDegrees / {1637D8A2-4248-4275-865D-558DE84AEDFD},11
    • SENSOR_DATA_TYPE_MAGNETIC_FIELD_STRENGTH_X_MILLIGAUSS / SensorDataTypeMagneticFieldStrengthXMilligauss / {1637D8A2-4248-4275-865D-558DE84AEDFD},19
    • SENSOR_DATA_TYPE_MAGNETIC_FIELD_STRENGTH_Y_MILLIGAUSS / SensorDataTypeMagneticFieldStrengthYMilligauss / {1637D8A2-4248-4275-865D-558DE84AEDFD},20
    • SENSOR_DATA_TYPE_MAGNETIC_FIELD_STRENGTH_Z_MILLIGAUSS / SensorDataTypeMagneticFieldStrengthZMilligauss / {1637D8A2-4248-4275-865D-558DE84AEDFD},21
    • SENSOR_DATA_TYPE_QUATERNION / SensorDataTypeQuaternion / {1637D8A2-4248-4275-865D-558DE84AEDFD},17
    • SENSOR_DATA_TYPE_ROTATION_MATRIX / SensorDataTypeRotationMatrix / {1637D8A2-4248-4275-865D-558DE84AEDFD},16
    • SENSOR_DATA_TYPE_LATITUDE_DEGREES / SensorDataTypeLatitudeDegrees / {055C74D8-CA6F-47D6-95C6-1ED3637A0FF4},2
    • SENSOR_DATA_TYPE_LONGITUDE_DEGREES / SensorDataTypeLongitudeDegrees / {055C74D8-CA6F-47D6-95C6-1ED3637A0FF4},3
    • SENSOR_DATA_TYPE_ALTITUDE_ELLIPSOID_METERS / SensorDataTypeAltitudeEllipsoidMeters / {055C74D8-CA6F-47D6-95C6-1ED3637A0FF4},5

    Table 4. Data Field identifier constants

    What makes Data Field identifiers different from sensor IDs is the use of a data type called a property key. A property key consists of a GUID (similar to the sensor GUIDs) plus an extra number called a "PID" (property ID). You might notice that the GUID part of a property key is common for sensors in the same category. All Data Field values have native data types, such as Boolean, unsigned char, int, float, and double.
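
    To make the GUID-plus-PID pairing concrete, here is how the lux Data Field from Table 4 expands with the DEFINE_PROPERTYKEY macro (the values are the ones from the table; the declaration itself already ships in the platform headers and is reproduced here only as an illustration):

    // The ambient light lux Data Field: the GUID identifies the group of
    // light-sensor data types, and the trailing PID (2) selects the lux field.
    DEFINE_PROPERTYKEY(SENSOR_DATA_TYPE_LIGHT_LEVEL_LUX,
        0xE4C77CE2, 0xDCB7, 0x46E9, 0x84, 0x39, 0x4F, 0xEC, 0x54, 0x88, 0x33, 0xA6, 2);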

    In Win32/COM, the value of a Data Field is stored in a polymorphic data type called PROPVARIANT. In .NET, a CLR (Common Language Runtime) data type called "object" does the same thing. The polymorphic data type must then be queried and/or cast to the "expected" or "documented" data type.

    Use the sensor's SupportsDataField() method to check the sensor for the Data Fields of interest. This is the most common programming idiom used to select sensors. Depending on the usage model of your app, you may only need a subset of the Data Fields; pick the sensors that support the ones you want. Type casting is required to assign the subclass member variables from the base class sensor.

    ISensor* pSensor = NULL;   // set inside the loop below
    ISensor* m_pAls = NULL;
    ISensor* m_pAccel = NULL;
    ISensor* m_pTilt = NULL;
    // Cycle through the collection looking for sensors we care about.
    ULONG ulCount = 0;
    HRESULT hr = pSensorCollection->GetCount(&ulCount);
    if (FAILED(hr))
    {
        ::MessageBox(NULL, _T("Unable to get count of sensors on the computer."), _T("Sensor C++ Sample"), MB_OK | MB_ICONERROR);
        return -1;
    }
    for (int i = 0; i < (int)ulCount; i++)
    {
        hr = pSensorCollection->GetAt(i, &pSensor);
        if (SUCCEEDED(hr))
        {
            VARIANT_BOOL bSupported = VARIANT_FALSE;
            hr = pSensor->SupportsDataField(SENSOR_DATA_TYPE_LIGHT_LEVEL_LUX, &bSupported);
            if (SUCCEEDED(hr) && (bSupported == VARIANT_TRUE)) m_pAls = pSensor;
            hr = pSensor->SupportsDataField(SENSOR_DATA_TYPE_ACCELERATION_Z_G, &bSupported);
            if (SUCCEEDED(hr) && (bSupported == VARIANT_TRUE)) m_pAccel = pSensor;
            hr = pSensor->SupportsDataField(SENSOR_DATA_TYPE_TILT_Z_DEGREES, &bSupported);
            if (SUCCEEDED(hr) && (bSupported == VARIANT_TRUE)) m_pTilt = pSensor;
            .
            .
            .
        }
    }
    
    

    Code: Using the sensor's SupportsDataField() method to check for supported Data Fields

    Sensor Properties

    In addition to Data Fields, sensors have Properties that can be used for identification and configuration. Table 5 shows the most commonly used Properties. Like Data Fields, Properties have constant names for Win32/COM and .NET, and those constants are really property key numbers underneath. Properties can be extended by vendors and also have PROPVARIANT polymorphic data types. Unlike Data Fields, which are read-only, Properties can be read and written. It is up to the individual sensor whether it rejects a write attempt; because no exception is thrown when a write attempt fails, you must perform a write-read-verify.

    Each entry below gives the Win32*/COM constant, the .NET name, and the property key (GUID, PID):

    Identification:

    • SENSOR_PROPERTY_PERSISTENT_UNIQUE_ID / SensorID / {7F8383EC-D3EC-495C-A8CF-B8BBE85C2920},5
    • WPD_FUNCTIONAL_OBJECT_CATEGORY / CategoryID / {8F052D93-ABCA-4FC5-A5AC-B01DF4DBE598},2
    • SENSOR_PROPERTY_TYPE / TypeID / {7F8383EC-D3EC-495C-A8CF-B8BBE85C2920},2
    • SENSOR_PROPERTY_STATE / State / {7F8383EC-D3EC-495C-A8CF-B8BBE85C2920},3
    • SENSOR_PROPERTY_MANUFACTURER / SensorManufacturer / {7F8383EC-D3EC-495C-A8CF-B8BBE85C2920},6
    • SENSOR_PROPERTY_MODEL / SensorModel / {7F8383EC-D3EC-495C-A8CF-B8BBE85C2920},7
    • SENSOR_PROPERTY_SERIAL_NUMBER / SensorSerialNumber / {7F8383EC-D3EC-495C-A8CF-B8BBE85C2920},8
    • SENSOR_PROPERTY_FRIENDLY_NAME / FriendlyName / {7F8383EC-D3EC-495C-A8CF-B8BBE85C2920},9
    • SENSOR_PROPERTY_DESCRIPTION / SensorDescription / {7F8383EC-D3EC-495C-A8CF-B8BBE85C2920},10
    • SENSOR_PROPERTY_MIN_REPORT_INTERVAL / MinReportInterval / {7F8383EC-D3EC-495C-A8CF-B8BBE85C2920},12
    • SENSOR_PROPERTY_CONNECTION_TYPE / SensorConnectionType / {7F8383EC-D3EC-495C-A8CF-B8BBE85C2920},11
    • SENSOR_PROPERTY_DEVICE_ID / SensorDevicePath / {7F8383EC-D3EC-495C-A8CF-B8BBE85C2920},15
    • SENSOR_PROPERTY_RANGE_MAXIMUM / SensorRangeMaximum / {7F8383EC-D3EC-495C-A8CF-B8BBE85C2920},21
    • SENSOR_PROPERTY_RANGE_MINIMUM / SensorRangeMinimum / {7F8383EC-D3EC-495C-A8CF-B8BBE85C2920},20
    • SENSOR_PROPERTY_ACCURACY / SensorAccuracy / {7F8383EC-D3EC-495C-A8CF-B8BBE85C2920},17
    • SENSOR_PROPERTY_RESOLUTION / SensorResolution / {7F8383EC-D3EC-495C-A8CF-B8BBE85C2920},18

    Configuration:

    • SENSOR_PROPERTY_CURRENT_REPORT_INTERVAL / ReportInterval / {7F8383EC-D3EC-495C-A8CF-B8BBE85C2920},13
    • SENSOR_PROPERTY_CHANGE_SENSITIVITY / ChangeSensitivity / {7F8383EC-D3EC-495C-A8CF-B8BBE85C2920},14
    • SENSOR_PROPERTY_REPORTING_STATE / ReportingState / {7F8383EC-D3EC-495C-A8CF-B8BBE85C2920},27

    Table 5. Commonly used sensor Properties and PIDs

    Setting Sensor Sensitivity

    The sensitivity setting is a very useful sensor Property. It can be used to assign a threshold that controls or filters the number of SensorDataReports sent to the host computer. Traffic can be reduced this way: only send up those DataUpdated events that are worth bothering the host CPU about. Microsoft defined the data type of this sensitivity Property as a container type, called IPortableDeviceValues in Win32/COM and SensorPortableDeviceValues in .NET. The container holds a collection of tuples, each of which is a Data Field property key followed by the sensitivity value for that Data Field. The sensitivity always uses the same units of measure and data type as the matching Data Field.

    // Configure sensitivity
    // create an IPortableDeviceValues container for holding the <Data Field, Sensitivity> tuples.
    IPortableDeviceValues* pInSensitivityValues;
    hr = ::CoCreateInstance(CLSID_PortableDeviceValues, NULL, CLSCTX_INPROC_SERVER, IID_PPV_ARGS(&pInSensitivityValues));
    if (FAILED(hr))
    {
        ::MessageBox(NULL, _T("Unable to CoCreateInstance() a PortableDeviceValues collection."), _T("Sensor C++ Sample"), MB_OK | MB_ICONERROR);
        return -1;
    }
    // fill in IPortableDeviceValues container contents here: 0.1 G sensitivity in each of X, Y, and Z axes.
    PROPVARIANT pv;
    PropVariantInit(&pv);
    pv.vt = VT_R8; // COM type for (double)
    pv.dblVal = (double)0.1;
    pInSensitivityValues->SetValue(SENSOR_DATA_TYPE_ACCELERATION_X_G, &pv);
    pInSensitivityValues->SetValue(SENSOR_DATA_TYPE_ACCELERATION_Y_G, &pv);
    pInSensitivityValues->SetValue(SENSOR_DATA_TYPE_ACCELERATION_Z_G, &pv);
    // create an IPortableDeviceValues container for holding the <SENSOR_PROPERTY_CHANGE_SENSITIVITY, pInSensitivityValues> tuple.
    IPortableDeviceValues* pInValues;
    hr = ::CoCreateInstance(CLSID_PortableDeviceValues, NULL, CLSCTX_INPROC_SERVER, IID_PPV_ARGS(&pInValues));
    if (FAILED(hr))
    {
        ::MessageBox(NULL, _T("Unable to CoCreateInstance() a PortableDeviceValues collection."), _T("Sensor C++ Sample"), MB_OK | MB_ICONERROR);
        return -1;
    }
    // fill it in
    pInValues->SetIPortableDeviceValuesValue(SENSOR_PROPERTY_CHANGE_SENSITIVITY, pInSensitivityValues);
    // now actually set the sensitivity
    IPortableDeviceValues* pOutValues;
    hr = pAls->SetProperties(pInValues, &pOutValues);
    if (FAILED(hr))
    {
        ::MessageBox(NULL, _T("Unable to SetProperties() for Sensitivity."), _T("Sensor C++ Sample"), MB_OK | MB_ICONERROR);
        return -1;
    }
    // check to see if any of the setting requests failed
    DWORD dwCount = 0;
    hr = pOutValues->GetCount(&dwCount);
    if (FAILED(hr) || (dwCount > 0))
    {
        ::MessageBox(NULL, _T("Failed to set one-or-more Sensitivity values."), _T("Sensor C++ Sample"), MB_OK | MB_ICONERROR);
        return -1;
    }
    PropVariantClear(&pv);
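
    Because SetProperties() does not throw when a sensor rejects a value, the write-read-verify mentioned earlier is worth doing explicitly. The following sketch (reusing pAls from the sample above; treat the details as illustrative rather than production code) reads the sensitivity back and checks one of the values:

    // Sketch only: read back the sensitivity we just set, to verify the write
    // actually took effect on the sensor.
    PROPVARIANT pvCheck;
    PropVariantInit(&pvCheck);
    hr = pAls->GetProperty(SENSOR_PROPERTY_CHANGE_SENSITIVITY, &pvCheck);
    if (SUCCEEDED(hr) && (pvCheck.vt == VT_UNKNOWN) && (pvCheck.punkVal != NULL))
    {
        IPortableDeviceValues* pReadBack = NULL;
        if (SUCCEEDED(pvCheck.punkVal->QueryInterface(IID_PPV_ARGS(&pReadBack))))
        {
            PROPVARIANT pvX;
            PropVariantInit(&pvX);
            if (SUCCEEDED(pReadBack->GetValue(SENSOR_DATA_TYPE_ACCELERATION_X_G, &pvX)))
            {
                // pvX.dblVal should now be 0.1 if the sensor accepted the request.
            }
            PropVariantClear(&pvX);
            pReadBack->Release();
        }
    }
    PropVariantClear(&pvCheck);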
    
    

    Requesting Sensor Permissions

    Some information provided by sensors may be considered sensitive, i.e., personally identifiable information (PII). Data Fields such as the computer's location (latitude and longitude) could be used to track the user. Therefore, Windows forces apps to obtain end-user permission to access the sensor. Use the State property of the sensor and the RequestPermissions() method of the SensorManager as needed.

    The RequestPermissions() method takes an array of sensors as an argument, so an app can request permission for more than one sensor at a time. The C++/COM code is shown below. Note that you must provide an (ISensorCollection *) argument to RequestPermissions().

    // Get the sensor's state
    
    SensorState state = SENSOR_STATE_ERROR;
    HRESULT hr = pSensor->GetState(&state);
    if (FAILED(hr))
    {
        ::MessageBox(NULL, _T("Unable to get sensor state."), _T("Sensor C++ Sample"), MB_OK | MB_ICONERROR);
        return -1;
    }
    // Check for access permissions, request permission if necessary.
    if (state == SENSOR_STATE_ACCESS_DENIED)
    {
        // Make a SensorCollection with only the sensors we want to get permission to access.
        ISensorCollection *pSensorCollection = NULL;
        hr = ::CoCreateInstance(CLSID_SensorCollection, NULL, CLSCTX_INPROC_SERVER, IID_PPV_ARGS(&pSensorCollection));
        if (FAILED(hr))
        {
            ::MessageBox(NULL, _T("Unable to CoCreateInstance() a SensorCollection."), 
                _T("Sensor C++ Sample"), MB_OK | MB_ICONERROR);
            return -1;
        }
        pSensorCollection->Clear();
        pSensorCollection->Add(pAls); // add 1 or more sensors to request permission for...
        // Have the SensorManager prompt the end-user for permission.
        hr = m_pSensorManager->RequestPermissions(NULL, pSensorCollection, TRUE);
        if (FAILED(hr))
        {
            ::MessageBox(NULL, _T("No permission to access sensors that we care about."), 
                _T("Sensor C++ Sample"), MB_OK | MB_ICONERROR);
            return -1;
        }
    }
    
    
     

    Sensor data updates

    Sensors report data by raising an event called DataUpdated. The actual data fields are packaged inside a SensorDataReport, which is passed to any attached DataUpdated event handlers. An app obtains the SensorDataReport by hooking a callback handler to the sensor's DataUpdated event. The event occurs on a Windows Sensor Framework thread, which is a different thread from the message-pump thread used to update the app's GUI. Therefore, the SensorDataReport must be handed off from the event handler (Als_DataUpdate) to a separate handler (Als_UpdateGUI) that can execute in the GUI thread's context. In .NET, such a handler is called a delegate function.

    The example below shows the implementation of the delegate function. In C++/COM, you must use the SetEventSink method to hook the callback. The callback cannot be just a function; it must be an entire class that inherits from ISensorEvents and also implements IUnknown. The ISensorEvents interface must implement the following callback functions:

    	STDMETHODIMP OnEvent(ISensor *pSensor, REFGUID eventID, IPortableDeviceValues *pEventData);
    	STDMETHODIMP OnDataUpdated(ISensor *pSensor, ISensorDataReport *pNewData);
    	STDMETHODIMP OnLeave(REFSENSOR_ID sensorID);
    	STDMETHODIMP OnStateChanged(ISensor* pSensor, SensorState state);
    // Hook the Sensor for any DataUpdated, Leave, or StateChanged events.
    SensorEventSink* pSensorEventClass = new SensorEventSink();  // create C++ class instance
    ISensorEvents* pSensorEvents = NULL;
    // get the ISensorEvents COM interface pointer
    HRESULT hr = pSensorEventClass->QueryInterface(IID_PPV_ARGS(&pSensorEvents)); 
    if (FAILED(hr))
    {
        ::MessageBox(NULL, _T("Cannot query ISensorEvents interface for our callback class."), 
            _T("Sensor C++ Sample"), MB_OK | MB_ICONERROR);
        return -1;
    }
    hr = pSensor->SetEventSink(pSensorEvents); // hook COM interface of our class to Sensor eventer
    if (FAILED(hr))
    {
        ::MessageBox(NULL, _T("Cannot SetEventSink on the Sensor to our callback class."), 
            _T("Sensor C++ Sample"), MB_OK | MB_ICONERROR);
        return -1;
    }
    
    

    Code: setting a COM event sink for the sensor

    The DataUpdated event handler receives the SensorDataReport (and the sensor that raised the event) as arguments. It calls the form's Invoke() method to post those items to the delegate function. The GUI thread runs the delegate functions posted to its Invoke queue and passes the arguments to them. The delegate function casts the SensorDataReport to the required subclass, gaining access to its data fields. The data fields are extracted with the GetDataField() method of the SensorDataReport object. Each data field must be cast to its "expected" or "documented" data type (from the generic/polymorphic data type returned by GetDataField()). The app can then format and display the data in the GUI.

    The OnDataUpdated event handler receives the SensorDataReport (and the sensor that raised the event) as arguments. The data fields are extracted with the GetSensorValue() method of the SensorDataReport object. Each data field's PROPVARIANT needs to be checked for the "expected" or "documented" data type. The app can then format and display the data in the GUI. No equivalent of the C# delegate is needed here, because all C++ GUI functions (such as the ::SetWindowText() shown here) use Windows message passing to post the GUI update to the GUI thread / message loop (the WndProc of your main window or dialog).

    STDMETHODIMP SensorEventSink::OnDataUpdated(ISensor *pSensor, ISensorDataReport *pNewData)
    {
        HRESULT hr = S_OK;
        if ((NULL == pNewData) || (NULL == pSensor)) return E_INVALIDARG;
        float fLux = 0.0f;
        PROPVARIANT pv = {};
        hr = pNewData->GetSensorValue(SENSOR_DATA_TYPE_LIGHT_LEVEL_LUX, &pv);
        if (SUCCEEDED(hr))
        {
            if (pv.vt == VT_R4) // make sure the PROPVARIANT holds a float as we expect
            {
                // Get the lux value.
                fLux = pv.fltVal;
                // Update the GUI
                wchar_t *pwszLabelText = (wchar_t *)malloc(64 * sizeof(wchar_t));
                swprintf_s(pwszLabelText, 64, L"Illuminance Lux: %.1f", fLux);
                BOOL bSuccess = ::SetWindowText(m_hwndLabel, (LPCWSTR)pwszLabelText);
                if (bSuccess == FALSE)
                {
                    ::MessageBox(NULL, _T("Cannot SetWindowText on label control."), 
                        _T("Sensor C++ Sample"), MB_OK | MB_ICONERROR);
                }
                free(pwszLabelText);
            }
        }
        PropVariantClear(&pv);
        return hr;
    }
    
    

    Data fields can also be extracted from a SensorDataReport by referencing properties of the SensorDataReport object. This only works in the .NET API, and only for the "well known" or "expected" data fields of a particular SensorDataReport subclass. In the Win32/COM API, the GetDataField method must be used. The underlying driver/firmware can "piggyback" arbitrary "extended/unexpected" data fields inside SensorDataReports as "dynamic data fields"; the GetDataField method can be used to extract those as well.

    Using sensors in Windows Store apps


    Unlike the desktop mode, the WinRT sensor API follows a common template for each sensor:

    • There is usually a single event, named ReadingChanged, that calls the callback with an xxxReadingChangedEventArgs containing a Reading object (which holds the actual data). The accelerometer is an exception; it also has a Shaken event.
    • The hardware-bound instance of the sensor class is retrieved with the GetDefault() method.
    • Polling can be done with the GetCurrentReading() method.

    Windows Store apps are typically written in JavaScript* or C#. The API has different language bindings, which results in slightly different capitalization of the API names and slightly different ways of handling events. The simplified API is easier to use; its pros and cons are listed in Table 6.

    Feature

    SensorManager

    There is no SensorManager to deal with. Apps use the GetDefault() method to get an instance of the sensor class.

    • You may not be able to search for arbitrary sensor instances. If more than one sensor of a particular type exists on the computer, you will only see the "first" one.
    • You may not be able to search for arbitrary sensor types or categories by GUID. Vendor value-add extensions are not available.

    Events

    Apps only have the DataUpdated event to worry about.

    • Apps have no access to the Enter, Leave, StatusChanged, or arbitrary event types. Vendor value-add extensions are not available.

    Sensor properties

    Apps only have the ReportInterval property to worry about.

    • Apps have no access to other properties, including the most useful one: sensitivity.
    • Other than manipulating the ReportInterval property, Windows Store apps have no way to tune or control the flow of data reports.
    • Apps cannot access arbitrary properties by property key. Vendor value-add extensions are not available.

    Data report properties

    Apps only have a few pre-defined data fields per sensor to worry about.

    • Apps have no access to other data fields. If a sensor "piggybacks" additional well-known data fields in a data report beyond what the Windows Store app expects, those data fields are not available.
    • Apps cannot access arbitrary data fields by property key. Vendor value-add extensions are not available.
    • Apps cannot query at run time which data fields a sensor supports. They can only assume the data fields pre-defined by the API.

     Table 6. Sensor APIs for Metro style apps, with pros and cons

    Summary


    The Windows 8 APIs let developers use sensors on a variety of platforms, both in traditional desktop mode and in the new Windows Store app interface. In this article we surveyed the sensor APIs available to developers creating Windows 8 apps, focusing on the APIs and sample code suited to desktop apps. Many of the new Windows 8 APIs were further improved in the Windows 8.1 operating system, and this article provides links to a number of related samples on MSDN.

    Appendix


    Coordinate systems for different form factors
    The Windows API reports the X, Y, and Z axes in a way that is compatible with the HTML5 standard (and with Android*). This is also called the "ENU" system, because X points to virtual "East", Y points to virtual "North", and Z points "Up".

    To figure out the direction of rotation, use the "right-hand rule":

       * Point the thumb of your right hand along one of the axes.
       * Positive rotation about that axis follows the curve of your fingers.

    These are the X, Y, and Z axes for a tablet or phone (left) and for a clamshell computer (right). For more complex form factors (such as a clamshell convertible to a tablet), the "standard" orientation is the one the device has when it is in the "TABLET" state.

    If you want to develop a navigation app (such as a 3D space game), your program needs to convert from the "ENU" system. This is easily done with a matrix multiplication. Graphics libraries such as Direct3D* and OpenGL* have APIs that handle this.
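
    The original guide does not include code for this conversion, but a minimal sketch helps make the idea concrete. Everything in the example below, including the class name and the chosen target frame, is an illustrative assumption rather than part of the guide: it maps an ENU vector into a hypothetical Y-up graphics frame (X = East, Y = Up, Z = South) with a single 3x3 matrix multiplication, which amounts to an axis permutation swapping the roles of the Y and Z axes. A real engine would use the equivalent matrix utilities in Direct3D* or OpenGL*.

    // Hedged sketch: convert a vector from the ENU frame (X = East, Y = North, Z = Up)
    // into an assumed Y-up graphics frame (X = East, Y = Up, Z = South).
    public final class EnuToGraphics {
        // Row-major 3x3 change-of-basis matrix: x' = x, y' = z, z' = -y.
        private static final double[][] ENU_TO_Y_UP = {
            { 1.0,  0.0, 0.0 },
            { 0.0,  0.0, 1.0 },
            { 0.0, -1.0, 0.0 }
        };

        /** Multiplies a 3x3 matrix by a 3-element vector. */
        static double[] transform(double[][] m, double[] v) {
            double[] out = new double[3];
            for (int row = 0; row < 3; row++) {
                out[row] = m[row][0] * v[0] + m[row][1] * v[1] + m[row][2] * v[2];
            }
            return out;
        }

        public static void main(String[] args) {
            double[] enu = { 0.0, 0.5, -1.0 };          // example reading in ENU coordinates
            double[] yUp = transform(ENU_TO_Y_UP, enu); // same vector in the graphics frame
            System.out.printf("x=%.2f y=%.2f z=%.2f%n", yUp[0], yUp[1], yUp[2]);
        }
    }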

    MSDN Resources


    About the Authors


    Gael Hofemeier
    Gael is a software engineer in Intel's Developer Relations Division, focusing on business client technologies. She holds a B.S. in Mathematics and an MBA from the University of New Mexico, and enjoys hiking, biking, and photography.

    Dr. Deepak Vembar
    Deepak Vembar is a research scientist in the Interaction and Experience Research (IXR) group at Intel Labs. His research focuses on computer graphics and human-computer interaction, including real-time graphics, virtual reality, haptics, eye tracking, and user interaction. Prior to joining Intel Labs, Deepak was a software engineer in Intel's Software and Services Group (SSG), working with PC game developers to optimize their games for Intel platforms, teaching courses and guidelines on heterogeneous platform optimization, and writing university courseware using game demos as instructional media for school curricula.

    Intel and the Intel logo are trademarks of Intel Corporation in the U.S. and/or other countries.
    Copyright © 2012 Intel Corporation. All rights reserved.
    *Other names and brands may be claimed as the property of others.


    Attachments

    Download windows-8-1-sensor-dev-guide.pdf (999.63 KB)

  • Microsoft Windows* 8
  • Microsoft Windows* 8 Desktop
  • Sensors
  • Laptop
  • Tablet
  • URL
  • Education
  • Case Study: JOY Develops the First Musical-Visual Instrument with the Intel® Perceptual Computing SDK


    by Karen Marcus

    Download Article

    Download Case Study: JOY Develops the First Musical-Visual Instrument with the Intel® Perceptual Computing SDK [PDF 1MB]

    TheBestSync, a development company based in China, is a creative and energetic team with years of production experience. The company’s focus is on providing comprehensive technological solutions and execution integrated in an artistic package. Two technologies they have used to achieve these ends are perceptual computing and augmented reality (AR) technology, which is the integration of digital information with live video or the user’s environment in real time. AR also uses location-based system information to enhance users’ expression and sense of identification.

    The team has created a few applications—mostly shooting and racing games—using Kinect* technology. For example, one of the games involves two people racing, using their hands to try and wave the fastest.

    TheBestSync’s participation in Phase I of the Intel® Perceptual Computing Challenge offered the opportunity to create a musical-visual instrument based on perceptual computing. The application, called JOY, is intuitive enough for anyone (even children) to use, including those without musical training. The team was excited to take their perceptual computing and AR experience and knowledge to the next level.

    A Musical Innovation: JOY

    JOY is the first perceptual music-visual instrument. Performers can use it to display different sound elements and visual effects by altering the gestures, distance, depth, and altitude of their hands in front of the Creative* Interactive Gesture Camera (the Camera). The result is a simultaneous audio and visual experience operated through simple performer control.

    JOY was conceived by Alpha Lam, CEO of TheBestSync. With experience as a sound engineer and musician, Lam wanted to develop an instrument that allows users to perform music without the tough learning curve. In researching perceptual computing technology, he thought of designing an instrument that enables users to just move their fingers, without touching any physical object, to play. He called together music and programming specialists within TheBestSync to work on this project. Getting started was a challenge, but after several rounds of testing, the team became convinced of the advantages of the Intel® Perceptual Computing Software Development Kit (SDK) as well as the new method of playing music based on actions anyone could make.

    In his vision for JOY, Lam sees users playing music at home, sharing their creativity with friends, or showcasing it at parties. In particular, says Lam, “Professional DJs or musicians can feel free to express their unlimited musical creativity on stage.” (See Figure 1.)


    Figure 1. JOY in use

    Development Using the Intel Perceptual Computing SDK

    As perceptual computing technology advances, gesture, facial, and voice recognition will fundamentally change how users interact with computers. At the time of the challenge, the Intel Perceptual Computing SDK was still in beta, and the plan was to take participants’ feedback to improve future releases of the SDK.

    When TheBestSync developers heard about Intel’s perceptual computing innovations, they saw a match with other software they were developing. The team conducted an in-depth study of the beta version of the Intel Perceptual Computing SDK, and decided to join Phase I of the Intel Perceptual Computing Challenge. Lam says, “We created JOY based on the advantages of perceptual computing and hope more people can get to know our design to promote perceptual computing instrument development.”

    JOY was designed specifically for the Challenge (see Figure 2). Lam says, “We considered the status of the Intel Perceptual Computing SDK and fully used the gesture control function. We hadn’t used the same range of gestures in previous apps; those developed for JOY were new for us. We wanted to let users get to know the advantages of perceptual computing through our application.”


    Figure 2. A menu screen in JOY

    Deciding Which Elements to Include

    The team tried various input modes during the development process, including face, gesture, and voice modules. Lam explains, “When users tilted their head or turned it to the left or right, the face module sometimes failed to recognize the face. In this case, the users would need to turn their head back to the front to reactivate the recognition. I believe Intel is trying hard to improve this issue in the SDK. We used a program algorithm to improve this but still weren’t able to fully solve the problem.”

    Lam adds, “For the voice module, the recognition was a bit slow. The new version of the SDK improves the voice recognition a lot, including increased language options and recognition capability.”

    In the version of the application submitted for the Challenge, gesture recognition was the only input mode used. Lam notes, “We tested the application and found that the gesture control was the most stable part, the part that staged the best, and the easiest part for users to control. Therefore, we designed the application control mode based on hand manipulation and music manipulation. We modified the creative direction based on making the experience of using the application as user-friendly as possible.”

    The initial idea for gesture-controlled functions came from a combination of Lam’s understanding of music performance and his knowledge of perceptual computing. Application improvement came from user testing. Lam notes, “In our early efforts with gesture recognition, there were errors in left/right hand recognition. For example, the default setting was for the first hand recognized to be the left hand. So, when players raised their right hand first to start the game, there was a recognition error, and all following actions got swapped between the right and left hands. To resolve this, we added criteria to assist with recognition. For example, we programmed the application to compare the elbow identification point versus the palm identification point; if the x-axis coordinates of the elbow identification point were bigger than the palm point, it was seen as the right hand. We realize this solution may still need to be improved.”

    In addition, there were miscalculations as to how many fingers were being captured. The team filtered the finger quantity calculation to ensure stable recognition.

    To determine which gestures worked best, the team did several rounds of testing to ensure that each one was stable and able to control the application continuously and individually. “For each gesture,” says Lam, “we tried to make it as intuitive and easy to associate as possible. For example, changing the distance between right and left hands horizontally activates reverb, while changing it vertically activates an echo effect.” The team found that five fingers all open worked best for recognition; waving or circling hands was also stable.

    The interface used to show how many fingers are being captured was inspired, says Lam, by stage lights: “We tried our best not to destroy the overall visual aesthetic by showing the fingers while providing clear enough hints to players.”

    Design Challenges

    The biggest challenge in the development process was making the finger controls more user-friendly. The team implemented several adjustments:

    • To ensure the application accurately detected the numbers of validated fingers, the team leveraged a screening technique to filter out the unstable part and provide precise finger detection. A change to sampling frequency was not needed, but, says Lam, “If we screened three frames with the same result, it was confirmed as an effective recognition, and the application filtered out the invalid data.”
    • To control the mix of gestures and the order in which they occur, the team connected musical tracks to the number of fingers recognized. The fingers captured activate corresponding musical quantities and sequences.
    • To maintain effective program fluency when changing gesture movements, the team filtered data to reduce recognition errors. They also added a preliminary judgment, which enables the application to judge which function the user is controlling.

    Testing

    To test this experience, the team invited two different groups of people to try the application: users with no musical background and professional musicians. They wanted to ensure that the application was user-friendly enough for those with a limited music background.

    In a simple introduction that explained the way the application works, the team told testers: “Each of the 10 fingers represents a track of music, so, 10 tracks are possible to use for creating different sounds. Users can remix them through changes to finger combinations.” Following this introduction, both groups of users could easily manipulate the application.

    The musician testers expected more functions and more ways of manipulating the application. Conversely, the typical response from those without musical knowledge was that it was a cool application and a brand new experience.

    Future Plans

    Though the team developed the application for the Challenge, they continue to improve it. Lam explains: “In the new version, we added a face landmark application programming interface (API) to use different input modes. We plan to add touch screen and keyboard input to enable users to switch between different devices and to provide a smoother, easier manipulation experience.” Lam adds, “We will enable the application to switch from gesture to touch screen, and we will switch the status.” The application will also switch to touch mode when users intentionally change from gesture input to the touch screen, but if no follow-up action is taken after a switch is detected, the application will remain in the same mode (see Figure 3).


    Figure 3. JOY with facial recognition

    In addition, the team has added music factor recording—an import function—music play list editing, and music sharing.

    The team’s goals for JOY include popularizing Camera functionality and commercializing the application.

    Development Experience

    The team is pleased with their success in applying the perceptual music and visual playing concept to JOY. Lam believes this is the first time these components have all been brought together in one application. Another achievement was redefining the perceptual instrument to synchronize audio and visual experiences.

    They learned some valuable lessons through the development experience. As a result of the development, says Lam, “TheBestSync now has an in-depth understanding of the different perceptual computing APIs as well as the lessons we learned during the testing process. Now we can design and create applications based on what we know about the advantages and disadvantages of perceptual computing.” He adds, “We have developed some AR applications for user interaction, such as one that can ‘see through’ the laptop (like a computer X-ray) to create a fun environment for users to interact with and better showcase products.”

    The team urges other developers to dig into the details to understand the platform and conduct tests during the idea-generation stage to showcase the advantages of perceptual computing and avoid the disadvantages. Lam notes, “The Intel Perceptual Computing SDK is still under development, and we can foresee additional functional improvements in the future. Keeping an eye on the latest updates is crucial to development.”

    Lam reflects that the emergence of perceptual computing offers more possibilities for music performance and creation. He says, “It’s a brand new experience in music performance. Anyone who likes music, even without instrument or music knowledge, can flexibly, simply, and intuitively manipulate sound using the perceptual interface, then create and perform various styles of music. It is a true music-creation tool that has no boundaries.”

    Further, says Lam, “We see perceptual computing as a more intelligent human-computer interaction. It allows the computer to ‘sense’ users, while users have a more intuitive and natural way to manipulate the computer. It’s similar to going from using a mouse to using a touch screen; this is another innovation in input methods. We believe perceptual computing will redefine the relationship between human and computer, enable more input mode possibilities, and bring the user experience into a brand new world.”

    Other Development Considerations

    The team used the Unity* 3D development engine to quickly integrate three-dimensional (3D) models and animation into the application. Lam notes, “The Unity Asset Store also provides sufficient plug-ins to make the development more efficient.”

    To program JOY, the team used C# and C++. They chose C# because it is compatible with Unity 3D. Lam says, “C# can quickly and conveniently adapt to Unity development. C# provides data encapsulation to raw data, which is convenient for developers and shortened our development duration.”

    C++ was selected because the Unity interface included in the Intel Perceptual Computing SDK was insufficient to fulfill development needs. For example, says Lam, “There was no prompt function when users’ hands exceeded the detectable range. We found that age, gender, and facial expression functions were also lacking.” So, the team used C++ to extend the Unity API beyond the Intel Perceptual Computing SDK and complete the development. They found that it provided convenient function extension.

    Other tools included:

    • Autodesk Maya* 3D animation, which has powerful design functions that are easily adaptable to Unity
    • Avid Pro Tools|HD, which has powerful sound recording and editing functions that provide a higher sound quality
    • Apple Logic Pro, which enables flexible music design to establish sufficient sound resources
    • NGUI Unity plug-in, which efficiently brought user interface (UI) artistic design to the program

    Company

    Previous TheBestSync application development covered numerous apps, including AR apps, games, UI designs, and web sites. As for the future, TheBestSync will continue to advance the development of perceptual computing by increasing its labor investment, inviting third-party investors, and using other resources to support long-term development. Lam says, “With our strong production background and advanced technology development, we aim to provide a perfect user experience by integrating perceptual technology into innovative applications.”

    The company is also participating in Phase 2 of the Intel Perceptual Computing Challenge.

    To learn more about TheBestSync, go to www.thebestsync.com.

    About the Author

    Karen Marcus, M.A., is an award-winning technology marketing writer who has 16 years of experience. She has developed case studies, brochures, white papers, data sheets, solution briefs, articles, web site copy, video scripts, and other documents for such companies as Intel, IBM, Samsung, HP, Amazon Web Services, Amazon Webstore, Microsoft, and EMC. Karen is familiar with a variety of current technologies, including cloud computing, IT outsourcing, enterprise computing, operating systems, application development, digital signage, and personal computing.

    Intel and the Intel logo are trademarks of Intel Corporation in the U.S. and/or other countries.
    Copyright © 2013 Intel Corporation. All rights reserved.
    *Other names and brands may be claimed as the property of others

  • Intel® Perceptual Computing Challenge.
  • ULTRABOOK™
  • Gesture Recognition
  • applications
  • Developers
  • Microsoft Windows* 8
  • Windows*
  • C#
  • C/C++
  • Beginner
  • Perceptual Computing
  • Microsoft Windows* 8 Desktop
  • Sensors
  • User Experience and Design
  • Laptop
  • Desktop
  • URL
  • Implementing Face Detection in Android


    Download sample code

    Face Detection Sample [PDF 206KB]

    Introduction

    Face detection is an important functionality for many categories of mobile applications. It can provide additional search capabilities in photo catalogs, social applications, etc.

    Face detection is also a first step in implementing face recognition functionality.

    This article will review a standard Android API to detect faces on a saved image.

    Implementation

    The Android SDK contains an API for face detection: the android.media.FaceDetector class. This class detects faces in an image. To detect faces, call the findFaces method of the FaceDetector class. The findFaces method returns the number of detected faces and fills the FaceDetector.Face[] array. Please note that the findFaces method supports only bitmaps in RGB_565 format at this time.

    Each instance of the FaceDetector.Face class contains the following information:

    • Confidence that it’s actually a face – a float value between 0 and 1.
    • Distance between the eyes – in pixels.
    • Position (x, y) of the mid-point between the eyes.
    • Pose rotations (X, Y, Z).

    Unfortunately, it doesn’t contain a framing rectangle that includes the detected face.

    Sample code

    Here is sample source code for face detection. This sample code defines a custom View that shows a saved image from the SD card and draws transparent red circles over the detected faces.

    Source code:

    class Face_Detection_View extends View {
            private static final int MAX_FACES = 10;
            private static final String IMAGE_FN = "face.jpg";
            private Bitmap background_image;
            private FaceDetector.Face[] faces;
            private int face_count;
    	
            // preallocate for onDraw(...)
            private PointF tmp_point = new PointF();
            private Paint tmp_paint = new Paint();
    	
            public Face_Detection_View(Context context) {
                    super(context);
                    // Load an image from SD Card
                    updateImage(Environment.getExternalStorageDirectory() + "/" + IMAGE_FN);
            }
    	
            public void updateImage(String image_fn) {
                    // Set internal configuration to RGB_565
                    BitmapFactory.Options bitmap_options = new BitmapFactory.Options();
                    bitmap_options.inPreferredConfig = Bitmap.Config.RGB_565;
    	
                    background_image = BitmapFactory.decodeFile(image_fn, bitmap_options);
                    FaceDetector face_detector = new FaceDetector(
                                    background_image.getWidth(), background_image.getHeight(),
                                    MAX_FACES);
    	
                    faces = new FaceDetector.Face[MAX_FACES];
                    // The bitmap must be in 565 format (for now).
                    face_count = face_detector.findFaces(background_image, faces);
                    Log.d("Face_Detection", "Face Count: " + String.valueOf(face_count));
            }
    	
            public void onDraw(Canvas canvas) {
                    canvas.drawBitmap(background_image, 0, 0, null);
                    for (int i = 0; i < face_count; i++) {
                            FaceDetector.Face face = faces[i];
                            tmp_paint.setColor(Color.RED);
                            tmp_paint.setAlpha(100);
                            face.getMidPoint(tmp_point);
                            canvas.drawCircle(tmp_point.x, tmp_point.y, face.eyesDistance(),
                                            tmp_paint);
                    }
            }
    }

    For simplicity, this sample does not scale the picture to fit the screen. In a real application you will need to scale the picture, or scale the FaceDetector.Face attributes, to match the available screen space.
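
    As a rough illustration of that scaling step, here is a hedged variant of the sample's onDraw() method. It is not part of the original sample; it reuses the sample's fields, and the uniform-fit policy is an assumption. Scaling the canvas scales the bitmap and the detected face coordinates together.

    // Hypothetical drop-in replacement for onDraw() in Face_Detection_View.
    @Override
    public void onDraw(Canvas canvas) {
            if (background_image == null) return;

            // Uniform scale factor so the whole image fits inside the view.
            float scale = Math.min(
                            (float) getWidth() / background_image.getWidth(),
                            (float) getHeight() / background_image.getHeight());

            canvas.save();
            canvas.scale(scale, scale); // scales the bitmap and the circles together
            canvas.drawBitmap(background_image, 0, 0, null);
            for (int i = 0; i < face_count; i++) {
                    FaceDetector.Face face = faces[i];
                    tmp_paint.setColor(Color.RED);
                    tmp_paint.setAlpha(100);
                    face.getMidPoint(tmp_point);
                    canvas.drawCircle(tmp_point.x, tmp_point.y, face.eyesDistance(), tmp_paint);
            }
            canvas.restore();
    }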

    Conclusion

    The Android SDK provides a standard API for face detection on a saved image. To detect faces in the camera preview frames, consider using the Camera.FaceDetectionListener class, as sketched below.
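
    A minimal sketch of that preview-based alternative follows. It is not part of this article's sample: it assumes an android.hardware.Camera instance that has already been opened and is previewing, and the method name enableFaceDetection is made up for illustration.

    // Assumes: import android.hardware.Camera; import android.util.Log;
    private void enableFaceDetection(Camera camera) {
            // Not every camera supports preview-frame face detection.
            if (camera.getParameters().getMaxNumDetectedFaces() > 0) {
                    camera.setFaceDetectionListener(new Camera.FaceDetectionListener() {
                            @Override
                            public void onFaceDetection(Camera.Face[] faces, Camera cam) {
                                    // Face.rect coordinates are reported in the
                                    // (-1000, -1000) to (1000, 1000) preview space.
                                    Log.d("Face_Detection", "Preview faces: " + faces.length);
                            }
                    });
                    // Must be called while the preview is running.
                    camera.startFaceDetection();
            }
    }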

    Unfortunately, the Android SDK does not provide an API for implementing face recognition functionality.

    References

    FaceDetector class:
    http://developer.android.com/reference/android/media/FaceDetector.html

    FaceDetector.Face class:
    http://developer.android.com/reference/android/media/FaceDetector.Face.html

    Camera.FaceDetectionListener class:
    http://developer.android.com/reference/android/hardware/Camera.FaceDetectionListener.html

    Intel, the Intel logo, Atom, and Core are trademarks of Intel Corporation in the U.S. and/or other countries. Copyright © 2013 Intel Corporation. All rights reserved. *Other names and brands may be claimed as the property of others.

  • face detection
  • AndroidSampleCode
  • Sample Code
  • Graphics
  • Sensors
  • User Experience and Design
  • Java*
  • Android*
  • Developers
  • Android*
  • Using JavaFX to Implement Multi-touch with Java* on Windows* 8 Desktop


    Download as PDF

    Download source code

    This document covers the use of JavaFX to easily add multi-touch support to Windows 8 Desktop applications written in Java*. It starts by presenting an overview of JavaFX before covering two approaches to using JavaFX: using Java APIs only and using Java in conjunction with FXML. The examples are displayed in the context of a simple application tool for manipulating an image. The application is written in Java and is designed for Windows 8 devices.

    1. Introduction

    Multi-touch, gesture-based user interface support is inherent in Windows 8 applications written in languages that use the native Windows libraries; however, this option is not available to Java-based applications. In this case, an alternative development framework must be used. This white paper and accompanying sample applications cover JavaFX, an open source solution for adding multi-touch support to Windows 8 Desktop applications written in Java.

    JavaFX is a set of Java packages designed to support the development of rich, cross-platform applications written in Java. These packages cover user interface controls, media streaming, embedded web content, and a hardware-accelerated graphics pipeline. JavaFX also includes multi-touch support, which we will discuss in this paper. JavaFX is a Java library, meaning it can be called directly from any Java code, but it also supports a declarative markup language called FXML, which can be used to construct a JavaFX user interface. We demonstrate both methods in this paper. More information on JavaFX can be found at http://www.oracle.com/technetwork/java/javafx/documentation/index.html.

    2. Example Applications

    The code samples used in this document are from two Java applications: one demonstrates how to use JavaFX’s Java APIs, while the second uses Java in conjunction with FXML. To maintain readability of the source code, the demonstration applications are very simple, consisting of an image displayed in a main window that users can manipulate using common single and multi-touch gestures. At any point a user can reset the image to its starting condition. You can download the full source for the demo applications and then try it yourself, or use it as reference material to create your own application.

    Figure 1: Sample application with multi-touch interaction

    The following table describes the user interactions for the sample application.

    Table 1: Supported Actions

    Action                            Result
    Touch and drag image              Move image to a new location
    Pinch on image                    Decrease size of image
    Spread on image                   Increase size of image
    Two-finger rotation on image      Rotate image
    Touch [Reset] button              Restore image back to original location, size, and orientation

    3. Development Environment

    The example applications are Windows 8 Desktop applications, suitable for any tablet, convertible, or Ultrabook™ device with multi-touch support. JavaFX is fully integrated in the Java 7 Runtime Environment.

    Operating system        Windows* 8
    Language                Java* 7 or newer
    Multi-touch library     JavaFX 2.2
    IDE                     Netbeans 7.3.1

    Figure 2: Development environment

    4. JavaFX Scene Structure

    To display a user interface, JavaFX uses a scene graph design, an approach based on the hierarchical parent-child relationship of the elements that compose a scene. The base class for the JavaFX scene graph API is javafx.scene, and the base class for defining a single scene entity is javafx.scene.Node.

    Looking deeper into the class structure, the path splits in two major directions:

    • javafx.scene.Parent – An abstract class that can contain children. It further leads into implementations of controls, such as the Button or Label classes, and of container constructs, such as javafx.scene.Region.
    • javafx.scene.Shape – This class is the starting point for the various geometric shape implementations, eventually leading to the leaf nodes of a scene.

    Internally, the scene is composed of classes derived from Node and organized in a tree data structure. The tree is traversed by the painting engine when it is triggered by an event called a Pulse. Pulse events are emitted because of a trigger from the user or operating system or due to a time-triggered paint event. The painting engine in JavaFX is called Prism, and it abstracts OpenGL*, DirectX*, or, as a failsafe, Java2D painting engines.

    In most cases, when building an application using the JavaFX framework, we will use concrete implementations of classes coming from the inheritance paths listed above, and not from the sub-classing Node directly. However, Node is the class that provides the API for registering event handlers for touch and gesture events, therefore it will be the main focus of this article.
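
    As a point of reference only (this is not one of the sample applications, and the class name MinimalSceneGraph is made up), the short program below builds the smallest useful scene graph: a Rectangle leaf node inside a Pane parent, wrapped in a Scene and shown on the Stage.

    import javafx.application.Application;
    import javafx.scene.Scene;
    import javafx.scene.layout.Pane;
    import javafx.scene.paint.Color;
    import javafx.scene.shape.Rectangle;
    import javafx.stage.Stage;

    public class MinimalSceneGraph extends Application {
    	@Override
    	public void start(Stage stage) {
    		// Leaf node: a concrete Shape subclass.
    		Rectangle rect = new Rectangle(50, 50, 100, 100);
    		rect.setFill(Color.STEELBLUE);

    		// Parent node: a Region subclass that can hold children.
    		Pane root = new Pane();
    		root.getChildren().add(rect);

    		// The Scene wraps the root of the node tree; the Stage displays it.
    		stage.setScene(new Scene(root, 400, 300));
    		stage.show();
    	}

    	public static void main(String[] args) {
    		launch(args);
    	}
    }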

    5. Options for Building User Interfaces Using JavaFX

    JavaFX provides two options for creating user interfaces. Depending on our needs, we can synthesize the user interface by creating instances of the user interface component classes, or we can take a more visual approach by using Scene Builder to generate FXML templates. These two methods can be combined to form a flexible approach for designing the parts of the user interface. For more in depth information on creating user interfaces with JavaFX, please refer to the JavaFX tutorial (http://docs.oracle.com/javafx/).

    6. Gestures in a Scene Defined Using Only Java API

    The Node class provides an API to register callback event handlers. There are two ways of registering event handlers. The first is to use the setEventHandler function as shown in the following example:

    setEventHandler(SwipeEvent.ANY, new EventHandler<SwipeEvent>() {
    	@Override
    	public void handle(SwipeEvent t) {
    		…
    		<event handler logic>
    		…
    		t.consume();
    	}
    });
     

    The setEventHandler function takes two parameters:

    1. The type of the event, which is a concrete instance of EventType<T>. The convention is that the event types are defined as static fields of the class to which they belong; for example, the SwipeEvent class would have:
    public class SwipeEvent extends GestureEvent {
    public static final EventType<SwipeEvent> ANY;
    public static final EventType<SwipeEvent> SWIPE_LEFT;
    public static final EventType<SwipeEvent> SWIPE_RIGHT;
    public static final EventType<SwipeEvent> SWIPE_UP;
    public static final EventType<SwipeEvent> SWIPE_DOWN;
    ...
    
    2. The second parameter is a pointer to an instance of a class implementing a concrete version of the EventHandler<T> interface. In this example, an instance of an anonymous class was used, but the origin of the pointer is a design decision.

    The second option for registering an event or gesture handler is through a set of convenience functions, also provided by the Node class. The naming convention follows setOn<name_of_the_event>, for example setOnRotate(...). Since a convenience function corresponds to a single event, only one parameter is required, a pointer to an instance of a class that implements a concrete version of the EventHandler<T> interface. For example:

    setOnTouchPressed(new EventHandler<TouchEvent>() {
    	 @Override
    	 public void handle(TouchEvent t) {
    		…
    		event handler logic
    		…
    		t.consume();
    	 }
    });
    
    

    The selection and use of these two approaches depend on the developer. In our case, setEventHandler was used for SwipeEvent while convenience functions were used for the rest of the gestures. For SwipeEvent, we passed SwipeEvent.ANY as the triggering event type and then detected the actual SwipeEvent type. This allowed us to keep the response logic in a single function instead of four almost identical convenience functions. On the other hand, implementing the logic for the other gestures using convenience functions produced code that was easier to study.

    6.1. Touch Events

    In our example, we use touch events to allow a user to drag the selected rectangle around the scene. The application first waits for the user to touch and hold within the borders of the rectangle and then to move her finger around the scene.

    To implement such application behavior we have used three event types related to touch, which are:

    • Touch begin– Setup the initial state that consists of the current position of the touch and a flag to indicate the event is in motion.
    • Touch move– Retrieve the current position of the touch and calculate the translation of the rectangle position.
    • Touch end– When the user lifts her finger, we clear the event-in-motion flag.

    The touch begin event is implemented as follows:

    setOnTouchPressed(new EventHandler<TouchEvent>() {
    	@Override
    	public void handle(TouchEvent t) {
    		if (moveInProgress == false) {
    			if (m_container.getRegisterredItem() != 
    			  MovableRectangle.this) {
    				 m_container.unregisterItem();
    				 m_container.registerItem(MovableRectangle.this);
    			}
    
    			 moveInProgress = true;
    			 touchPointId = t.getTouchPoint().getId();
    
    			 prevPos = new Point2D(t.getTouchPoint().getSceneX(),
    			 t.getTouchPoint().getSceneY());
    			 System.out.println("TOUCH BEGIN " + t.toString());
    		 }
    
    		 t.consume();
    	}
    });
    
    

    The two key points to notice are:

    • The call to the consume function – Event handlers follow a phase of event processing called bubbling, where an event object is passed up to the parent object if it is not consumed by the current event handler. To prevent the parent container of our rectangle from processing this event (which would be undesirable, in this case), we must call the consume method of the event object.
    • Proper handling of multiple concurrent touch events – On multi-touch devices we can get input from multiple touch events occurring at the same time. In our case we do not want our rectangle’s drag behavior to be interrupted by other touch events occurring at the same time. To distinguish touch events, JavaFX uses a unique touch event ID, which is a component of the touch event object. In our application, we save this ID when the touch begins and react only to touch move events with a similar ID.

    The second part of our implementation is the handler that responds to the touch’s move event. Its implementation is shown below:

    setOnTouchMoved(new EventHandler<TouchEvent>() {
    	 @Override
    	 public void handle(TouchEvent t) {
    		 if (moveInProgress == true &&
    		   t.getTouchPoint().getId() == touchPointId) {
    			Point2D currPos = new Point2D(
    			  t.getTouchPoint().getSceneX(),
    			  t.getTouchPoint().getSceneY());
    			double[] translationVector = new double[2];
    			translationVector[0] = currPos.getX() - prevPos.getX();
    			translationVector[1] = currPos.getY() - prevPos.getY();
    
    			setTranslateX(getTranslateX() + translationVector[0]);
    			setTranslateY(getTranslateY() + translationVector[1]);
    
    			prevPos = currPos;
    		}
    		t.consume();
    	}
    });
    
    

    Similar to the previous case, we consume the event after processing it. As mentioned before, we only calculate position translations for touch events with an ID equal to the ID value of the original touch (saved by the touch begin handler).

    The final part is the touch end handler. The purpose of this handler is to clear the move-in-progress flag, completing the touch event. The next touch begin event will register another touch ID and start the process over.

    setOnTouchReleased(new EventHandler<TouchEvent>() {
    	 @Override
    		public void handle(TouchEvent t) {
    		if (t.getTouchPoint().getId() == touchPointId) {
    			moveInProgress = false;
    		}
    		t.consume();
    	 }
    });
    
    

    6.2. Gestures

    There are currently four types of gesture events supported in JavaFX:

    1. Rotate
    2. Scroll
    3. Swipe
    4. Zoom

    Architectural Considerations

    When dealing with these types of gestures, it is important to carefully choose the node on the scene that will register the event handlers. Let’s look at the Zoom gesture.

    Our first thought is that if we are going to modify the size of our rectangle object (zoom), then it should be the rectangle that registers the event handler and processes the event. But what about the situation where the rectangle is small enough that a proper zoom gesture cannot be performed? Having the rectangle handle the event in this case leads to a situation where we cannot un-zoom our rectangle. A better choice is to use the Pane element, which is a background canvas stretched to the size of the screen. This class receives callback information from its children when they are selected and, when a gesture occurs, performs the proper transformation on the selected node.

    Rotate Gesture

    The rotate event handler is implemented as follows:

    setOnRotate(new EventHandler<RotateEvent>() {
    	@Override
    	public void handle(RotateEvent t) {
    		if (currentSelection != null) {
    			Node selNode = currentSelection.getCorrespondingNode();
    			selNode.setRotate(selNode.getRotate() + t.getAngle());
    		}
    	t.consume();
    	}
    });
    
    

    The implementation follows the convention of using the convenience function to register the callback handler. The RotateEvent class provides all the parameters necessary to describe the gesture. The Node class provides a set of transformation helpers for both 2D and 3D. An alternative is to stack multiple transformations on a node by using the javafx.scene.transform.Transform type and the classes derived from it that implement more specific transformations, such as javafx.scene.transform.Translate; a sketch of this approach follows.
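
    For comparison, the sketch below is not part of the sample application; it reuses the currentSelection and getCorrespondingNode() names from the handler above and appends a javafx.scene.transform.Rotate to the node's transform list instead of modifying the rotate property. Stacked transforms accumulate in the order they are added, and a plain Rotate(angle) pivots around the node's local origin rather than its center.

    setOnRotate(new EventHandler<RotateEvent>() {
    	@Override
    	public void handle(RotateEvent t) {
    		if (currentSelection != null) {
    			Node selNode = currentSelection.getCorrespondingNode();
    			// Append a transform instead of overwriting the rotate property.
    			selNode.getTransforms().add(new Rotate(t.getAngle()));
    		}
    		t.consume();
    	}
    });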

    Zoom and Scroll Gestures

    The zoom and scroll gesture event handlers are implemented similarly. Here is the zoom event handler:

    setOnZoom(new EventHandler<ZoomEvent>() {
    	@Override
    	public void handle(ZoomEvent t) {
    		if (currentSelection != null) {
    			Node selNode = currentSelection.getCorrespondingNode();
    			selNode.setScaleX(selNode.getScaleX() * t.getZoomFactor());
    			selNode.setScaleY(selNode.getScaleY() * t.getZoomFactor());
    		}
    		t.consume();
    	}
    });
    
    

    And the scroll gesture event handler:

    setOnScroll(new EventHandler<ScrollEvent>() {
    	@Override
    	public void handle(ScrollEvent t) {
    		if (selectedGesture == GestureSelection.SCROLL && 
    			  currentSelection != null) {
    			Node selNode = currentSelection.getCorrespondingNode();
    			selNode.setTranslateX(selNode.getTranslateX() + 
    			  (t.getDeltaX() / 10.0));
    			selNode.setTranslateY(selNode.getTranslateY() + 
    			  (t.getDeltaY() / 10.0));
    		}
    		t.consume();
    	}
    });
    
    

    Swipe Gestures

    For the last gesture type on the list, swipe, the application took a slightly different approach. A single swipe event has four different convenience functions that apply to the possible directions of the swipe gesture: setOnSwipeLeft, setOnSwipeRight, setOnSwipeUp, and setOnSwipeDown. In our application the translation direction is defined by the direction of the swipe, which makes it reasonable to implement using setEventHandler instead of the convenience functions. This results in a very simple implementation, as you can see in the following code:

    setEventHandler(SwipeEvent.ANY, new EventHandler<SwipeEvent>() {
    	@Override
    	public void handle(SwipeEvent t) {
    		if (selectedGesture == GestureSelection.SWIPE && 
    		  currentSelection != null) {
    			Node selNode =currentSelection.getCorrespondingNode();
    			TranslateTransition transition = new 
    			TranslateTransition(Duration.millis(1000), selNode);
    			if (t.getEventType() == SwipeEvent.SWIPE_DOWN) {
    				transition.setByY(100);
    			} else if (t.getEventType() == SwipeEvent.SWIPE_UP) {
    				transition.setByY(-100);
    			} else if (t.getEventType() ==SwipeEvent.SWIPE_LEFT) {
    				transition.setByX(-100);
    			} else if (t.getEventType()==SwipeEvent.SWIPE_RIGHT) {
    				transition.setByX(100);
    			}
    			transition.play();
    
    		}
    		t.consume();
    	}
    });
    
    

    6.3. Cautions When Working with Gestures

    We want to conclude this section by discussing two cases related to gesture events of which developers should be aware.

    First, we note that swipe and scroll events are very similar in nature; in fact, a swipe gesture will trigger a scroll event at the same time. In our example we give the user the option to choose which one should be recognized, so both can be easily evaluated.

    Second, a situation that might produce undesirable effects is when a user drags the rectangle on the screen. Like a swipe event, this will also trigger a scroll event, which can generate unexpected motion of the rectangle. To prevent this situation from happening, the application consumes both the scroll and swipe events from the rectangle at the same time to prevent them from bubbling to the canvas.

    setOnScroll(new EventHandler<ScrollEvent>() {
    	@Override
    	public void handle(ScrollEvent t) {
    		t.consume();
    	}
    });
    
    setEventHandler(SwipeEvent.ANY, new EventHandler<SwipeEvent>() {
    	@Override
    	public void handle(SwipeEvent t) {
    		t.consume();
    	}
    });
    
    

    7. Defining a Scene Using Java and FXML

    In addition to the low-level approach when building user interfaces, JavaFX provides another option, one based on an XML-syntax language called FXML. FXML is a higher-level, declarative markup language used to describe the user interface for a JavaFX application. Developers can write FXML directly or use the JavaFX Scene Builder to create FXML markup. The advantages of using FXML are that it cleanly separates user interface design from application logic and it gives user interface designers an easier way to be involved in the development process.

    When using FXML, the FXML file is dynamically loaded into the application, where it is converted into a tree structure just as if you had built it entirely using Java code. The resulting root Node element can then be connected directly onto a scene, or plugged in as part of a bigger project.

    7.1. Defining Scene Elements in JavaFX

    There are different approaches when building user interfaces in FXML; this document will cover the following two:

    • Building a component hierarchy from ready-to-use classes
    • Building custom components using a root element defined in Java

    The first approach, which can also use custom components, uses previously defined tags that implement a specific user interface element. For example:

    <Pane id="StackPane" fx:id="touchPane" onRotate="#onRotate"
     onScroll="#onScroll" onSwipeDown="#onSwipe" onSwipeLeft="#onSwipe"
     onSwipeRight="#onSwipe" onSwipeUp="#onSwipe" onZoom="#onZoom"
     prefHeight="1000.0" prefWidth="1000.0" xmlns:fx="http://javafx.com/fxml"
     fx:controller="jfxgestureexample2.TouchPaneController">
    	<children>
    		<HBox fx:id="buttons" fillHeight="false" prefHeight="30.0"
    		  prefWidth="451.0" spacing="3.0">
    			<children>
    				...CUT...
    			</children>
    			<padding>
    				<Insets bottom="3.0" left="3.0" right="3.0"
    				 top="3.0" />
    			</padding>
    		</HBox>
    	</children>
    </Pane>
    
    

    FXML files like this example are parsed using FXMLLoader, which creates a tree object structure and adds it to the scene:

    Parent root = FXMLLoader.load(getClass().getResource("TouchPane.fxml"));
    
    	Scene scene = new Scene(root);
    	stage.setScene(scene);
    	stage.show();
    
    

    The second approach is useful when we want to create a custom user interface component or a user interface portion with a custom-class root element. In this case, we use fx:root and specify the type of the root class. Note that the type does not have to be one of our classes, but it does have to be a type within the inheritance path.

    <fx:root type="jfxgestureexample2.MovableElementController"
      onScroll="#onScroll" onSwipeDown="#onSwipe" onSwipeLeft="#onSwipe"
      onSwipeRight="#onSwipe" onSwipeUp="#onSwipe" onTouchMoved="#onTouchMoved"
      onTouchPressed="#onTouchPressed" onTouchReleased="#onTouchReleased"
      prefHeight="50.0" prefWidth="50.0" styleClass="mainFxmlClassUnselected" 
      xmlns:fx="http://javafx.com/fxml">
    	<stylesheets>
    		<URL value="@movableelement.css" />
    	</stylesheets>
    </fx:root>
    
    

    The actual setup of the root element, along with the controller class, occurs after the FXML file has been loaded but before it is instantiated. In our example application, this happens inside the MovableElementController class, which is both the controller and root object itself. In addition, it implements the rectangle that is visible on the scene.

    public MovableElementController(ISelectableItemContainer container) {
    	super();
    	m_container = container;
    
    	FXMLLoader loader = new
    	  FXMLLoader(getClass().getResource("MovableElement.fxml"));
    	loader.setRoot(this);
    	loader.setController(this);
    
    	try {
    		loader.load();
    	} catch (IOException exception) {
    		throw new RuntimeException(exception);
    	}
    }
    
    

    7.2. Connecting FXML and Back-end Code

    The next step is to connect events from FXML to the back-end code and to connect the back-end code to the user interface. FXML provides a very convenient API for exposing the elements present in the FXML document and connecting callback functions to those elements. It heavily relies on Java annotations and the reflection mechanism, combined with JavaFX properties and bindings.

    First, on the FXML side, each type of tag exposes a set of properties to which we can assign handlers that are present in the controller class. The second part of the definition has either the controller class set directly in FXML using the fx:controller property or assigned dynamically, similar to the previous example.

    <Pane id="StackPane" fx:id="touchPane" onRotate="#onRotate"
      onScroll="#onScroll" onSwipeDown="#onSwipe" onSwipeLeft="#onSwipe"
      onSwipeRight="#onSwipe" onSwipeUp="#onSwipe" onZoom="#onZoom"
      prefHeight="1000.0" prefWidth="1000.0" xmlns:fx="http://javafx.com/fxml"
      fx:controller="jfxgestureexample2.TouchPaneController">
    
    

    The names of the handlers must match methods found in the controller class and are marked with a hash (#) symbol preceding the name. On the Java code side, such methods are declared using the @FXML annotation:

    @FXML
    public void onZoom(ZoomEvent t) {
    	if (currentSelection != null) {
    		Node selNode = currentSelection.getCorrespondingNode();
    		selNode.setScaleX(selNode.getScaleX() * t.getZoomFactor());
    		selNode.setScaleY(selNode.getScaleY() * t.getZoomFactor());
    	}
    	t.consume();
    }
    
    

    The specified method has the same declaration structure as the handle method from the EventHandler interface and it accepts proper parameters.

    To expose an element present in the FXML document to the controller class, we have to assign it an ID name, as in this example:

    <ToggleButton id="setScrollBtn" fx:id="setSwipeBtn" mnemonicParsing="false"
      text="Swipe" toggleGroup="$gestureSelectionGroup" />
    
    

    Plus we need to declare a property matching the type and name used in the controller class:

    public class TouchPaneController implements Initializable,
      ISelectableItemContainer {
    
    	...
    	@FXML private Pane touchPane;
    	@FXML private HBox buttons;
    	@FXML ToggleButton setScrollBtn;
    	@FXML ToggleButton setSwipeBtn;
    	@FXML ToggleGroup gestureSelectionGroup;
    	...
    
    

    This is all that is required from the developer; the framework will handle the binding.

    7.3. Defining Functions for Touch Events and Gestures

    The application logic for the FXML example is the same as the direct Java API example, so the actual gestures and touch event handlers are almost identical. To keep you from referring back to the earlier sections we will present the code from the FXML implementation here. It is worth mentioning that the same callback function is assigned to all four swipe actions in the user interface declaration in the FXML file.

    public class MovableElementController extends Pane implements ISelectableItem
    {
    
    	... CUT...
    
    	@FXML
    	public void onTouchPressed(TouchEvent t) {
    		if (moveInProgress == false) {
    			if (m_container.getRegisterredItem() != 
    			  MovableElementController.this) {
    				m_container.unregisterItem();
    
    				m_container.registerItem(
    				  MovableElementController.this);
    			}
    
    			moveInProgress = true;
    			touchPointId = t.getTouchPoint().getId();
    
    			prevPos = new Point2D(t.getTouchPoint().getSceneX(),
    			  t.getTouchPoint().getSceneY());
    			System.out.println("TOUCH BEGIN " + t.toString());
    		}
    
    		t.consume();
    	}
    
    	@FXML
    	public void onTouchMoved(TouchEvent t) {
    		if (moveInProgress == true && t.getTouchPoint().getId() ==
    		  touchPointId) {
    
    			Point2D currPos = new
    			  Point2D(t.getTouchPoint().getSceneX(),
    			  t.getTouchPoint().getSceneY());
    			double[] translationVector = new double[2];
    			translationVector[0] = currPos.getX() - prevPos.getX();
    			translationVector[1] = currPos.getY() - prevPos.getY();
    
    			setTranslateX(getTranslateX() + translationVector[0]);
    			setTranslateY(getTranslateY() + translationVector[1]);
    
    			prevPos = currPos;
    		}
    		t.consume();
    	}
    
    	@FXML
    	public void onTouchReleased(TouchEvent t) {
    		if (t.getTouchPoint().getId() == touchPointId) {
    			moveInProgress = false;
    			System.err.println("TOUCH RELEASED " + t.toString());
    		}
    	
    		t.consume();
    	}
    
    	... CUT...
    }
    
    
    public class TouchPaneController implements Initializable,
      ISelectableItemContainer {
    
    	... CUT...
    
    	@FXML private Pane touchPane;
    	@FXML private HBox buttons;
    	@FXML ToggleButton setScrollBtn;
    	@FXML ToggleButton setSwipeBtn;
    	@FXML ToggleGroup gestureSelectionGroup;
    
    	... CUT...
    
    	@FXML
    	public void onScroll(ScrollEvent t) {
    		if (selectedGesture == GestureSelection.SCROLL &&
    		  currentSelection != null) {
    			Node selNode = currentSelection.getCorrespondingNode();
    			selNode.setTranslateX(selNode.getTranslateX() +
    			  (t.getDeltaX() / 10.0));
    			selNode.setTranslateY(selNode.getTranslateY() +
    			  (t.getDeltaY() / 10.0));
    		}
    		
    		t.consume();
    	}
    
    	@FXML
    	public void onZoom(ZoomEvent t) {
    		if (currentSelection != null) {
    			Node selNode = currentSelection.getCorrespondingNode();
    			selNode.setScaleX(selNode.getScaleX() * t.getZoomFactor());
    			selNode.setScaleY(selNode.getScaleY() * t.getZoomFactor());
    		}
    
    		t.consume();
    	}
    
    	@FXML
    	public void onRotate(RotateEvent t) {
    		if (currentSelection != null) {
    			Node selNode = currentSelection.getCorrespondingNode();
    			selNode.setRotate(selNode.getRotate() + t.getAngle());
    		}
    
    		t.consume();
    	}
    
    	@FXML
    	public void onSwipe(SwipeEvent t) {
    		if (selectedGesture == GestureSelection.SWIPE &&
    		  currentSelection != null) {
    			Node selNode = currentSelection.getCorrespondingNode();
    			TranslateTransition transition = new 
    			  TranslateTransition(Duration.millis(1000), selNode);
    			if (t.getEventType() == SwipeEvent.SWIPE_DOWN) {
    				transition.setByY(100);
    			} else if (t.getEventType() == SwipeEvent.SWIPE_UP) {
    				transition.setByY(-100);
    			} else if (t.getEventType() == SwipeEvent.SWIPE_LEFT) {
    				transition.setByX(-100);
    			} else if (t.getEventType() == SwipeEvent.SWIPE_RIGHT) {
    				transition.setByX(100);
    			}
    			transition.play();
    
    		}
    		t.consume();
    	}
    }
    
    

    Closing

    JavaFX provides a powerful and flexible means of adding multi-touch and gesture support to a Java-based application. In addition to media streaming, embedded web content, and a hardware-accelerated graphics pipeline, JavaFX includes user interface components and multi-touch events. Developers can create and access these components using JavaFX API calls directly from Java, building their application's user interface piece by piece. Alternatively, they can define the user interface in the FXML scripting language, either by writing FXML directly or by using JavaFX Scene Builder.

    Whichever approach a developer chooses, connecting the front-end user interface with the back-end logic is a straightforward process. With JavaFX it is easy to design a well-architected, modern user interface with multi-touch support for a Java application.

    You can download the full source for the demo applications and try them yourself, or use them as reference material for creating your own Java-based touch application.

     

    Intel, the Intel logo, Atom, and Core are trademarks of Intel Corporation in the U.S. and/or other countries.
    Copyright © 2013 Intel Corporation. All rights reserved.
    *Other names and brands may be claimed as the property of others.

     

  • api
  • JavaFx
  • WindowsCodeSample
  • Multi-touch input
  • Developers
  • Microsoft Windows* 8
  • Windows*
  • Development Tools
  • Microsoft Windows* 8 Desktop
  • Sensors
  • User Experience and Design
  • Laptop
  • Tablet
  • Desktop

  • URL
  • How to write a 2 in 1 aware application


    Dynamically adapting your UI to 2 in 1 configuration changes

    By: Stevan Rogers and Jamel Tayeb

    Downloads


    How to write a 2 in 1 aware application [PDF 596KB]

    Introduction


    With the introduction of 2 in 1 devices, applications need to be able to toggle between “laptop mode” and “tablet mode” to provide the best possible user experience. Touch-optimized applications are much easier to use in “tablet mode” (without a mouse or keyboard), than applications originally written for use in “laptop mode” or “desktop mode” (with a mouse and keyboard). It is critical for applications to know when the device mode has changed and toggle between the two modes dynamically.

    This paper describes a mechanism for detecting mode changes in a Windows* 8 or Windows* 8.1 Desktop application, and provides code examples from a sample application that was enhanced to provide this functionality.

    Basic Concepts


    Desktop Apps vs. Touch-Optimized Apps

    Most people are familiar with Desktop mode applications; Windows* XP and Windows* 7 applications are examples. These types of apps commonly use a mouse and a keyboard for input, and often have very small icons to click on, menus that contain many items, sub-menus, etc. These items are usually too small and too close together to be selected effectively using a touch interface.

    Touch-optimized applications are developed with the touch interface in mind from the start. The icons are normally larger, and the number of small items is kept to a minimum. These optimizations to the user interface make touch-based devices much easier to use. With the UI elements correctly sized, you should extend the same attention to the usability of the objects the application handles: graphic objects representing these items should also be adapted dynamically.

    The original MTI application

    The MTI (MultiTouchInterface) sample application was originally written as part of the Intel® Energy Checker SDK (see Additional Resources) to demonstrate (among many other things) how the ambient light sensors can be used to change the application interface.

    At its core, the MTI sample application allows the user to draw and manipulate a Bézier curve. The user simply defines, in order, the first anchor point, the first control point, the second anchor point, and finally the second and last control point.

    Figure 1 shows an example of a Bézier curve. Note that the size and color of each graphic element are designed to allow quick recognition—even via a computer vision system if required—and easy manipulation using touch.

    • Anchor points are square and tan.
    • Control points are round and red.
    • The segment joining an anchor point to its control point is blue.
    • The Bézier curve is black.


    Figure 1. Select control and anchor points to draw Bezier curve.

    Figure 2, Figure 3, Figure 4, and Figure 5 show the key interactions the user can have with the Bézier curve. An extra touch to the screen allows redrawing a new curve.


    Figure 2. A green vector shows the displacement of the control point.


    Figure 3. A grey arc shows the rotation of the Bezier curve.


    Figure 4. A green vector shows the change of the Bezier curve placement onscreen.


    Figure 5. Scale the Bezier curve with two fingers.

    Support for Ambient Light Sensors (ALS) was added to the MTI sample application. Once the level of light is determined, the display dynamically changes to make it easier for the user to see and use the application in varying light situations. Microsoft recommends increasing the size of UI objects and color contrast as illumination increases.

    MTI changes the interface in several stages, according to the light level. In a bright light situation, the MTI application changes the display to “high contrast” mode, increasing the size of the anchor and control points and fading the colors progressively to black and white. In a lower light situation, the application displays a more colorful (less contrasted) interface, with smaller anchor and control points.

    Indeed, anyone who has used a device with an LCD screen, even a backlit one, knows it may be difficult to read the screen on a sunny day. Figure 6 and Figure 7 show the issue clearly.


    Figure 6. Sample with low ALS setting in full sunlight (control points indicated on right).


    Figure 7. Sample with full ALS setting in full sunlight.

    In our case, we decided to re-use the size change mechanism that we implemented for the ALS support. We use only the two extremes of the display changes for the UI objects’ size that were introduced for the ALS support. We do this simply by setting the UI objects’ size to the minimum when the system is in non-tablet mode, and to the maximum when it is in tablet mode (by convention, the unknown mode maps to the non-tablet mode).

    Modified MTI (aka: Bezier_MTI)

    Using the two extremes of the display shown above, the original MTI source code was modified to add new capabilities to toggle between the two contrast extremes based on a certain event. The event used to toggle between the two contrast extremes is the switch between tablet mode and laptop mode of a 2 in 1 device. Switches in the hardware signal the device configuration change to the software (Figure 8).


    Figure 8. Notification process. All elements must be present.

    Upon starting the Bezier_MTI application, the initial status of the device is unknown (Figure 9). This is because the output of the API used to retrieve the configuration is valid only when a switch notification has been received. At any other time, the output of the API is undefined.

    Note that only the first notification is required, since an application can record that it received a notification by using a registry value. With this memorization mechanism, at the next start the application could detect its state using the API: if the application knows that it has received a notification in the past on this platform, then it can use the GetSystemMetrics function to detect its initial state. Such a mechanism is not implemented in this sample.
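
    For illustration only, that memorization could look something like the sketch below: a registry flag is written the first time a “ConvertibleSlateMode” notification arrives, and checked at the next start before trusting GetSystemMetrics. The key and value names are hypothetical, and this code is not part of the Bezier_MTI sample.

    #include <windows.h>
    
    /* Hypothetical registry location used to remember that a
       "ConvertibleSlateMode" notification was seen at least once. */
    #define APP_KEY   TEXT("Software\\MyCompany\\Bezier_MTI")
    #define APP_VALUE TEXT("SlateModeNotificationSeen")
    
    /* Call this from the WM_SETTINGCHANGE handler once a
       "ConvertibleSlateMode" notification has been received. */
    void remember_notification_seen(void)
    {
        HKEY key;
        DWORD seen = 1;
    
        if (RegCreateKeyEx(HKEY_CURRENT_USER, APP_KEY, 0, NULL, 0,
                           KEY_SET_VALUE, NULL, &key, NULL) == ERROR_SUCCESS) {
            RegSetValueEx(key, APP_VALUE, 0, REG_DWORD,
                          (const BYTE *)&seen, sizeof(seen));
            RegCloseKey(key);
        }
    }
    
    /* At startup: if a notification was seen in the past on this platform,
       GetSystemMetrics(SM_CONVERTIBLESLATEMODE) can be trusted; otherwise
       the state stays unknown until the first notification arrives. */
    int detect_initial_state_is_tablet(BOOL *known)
    {
        HKEY key;
        DWORD seen = 0, size = sizeof(seen);
    
        *known = FALSE;
        if (RegOpenKeyEx(HKEY_CURRENT_USER, APP_KEY, 0,
                         KEY_QUERY_VALUE, &key) == ERROR_SUCCESS) {
            if (RegQueryValueEx(key, APP_VALUE, NULL, NULL,
                                (BYTE *)&seen, &size) == ERROR_SUCCESS && seen) {
                *known = TRUE;
            }
            RegCloseKey(key);
        }
        /* 0 means tablet (slate) mode, non-zero means non-tablet mode. */
        return *known && (GetSystemMetrics(SM_CONVERTIBLESLATEMODE) == 0);
    }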


    Figure 9. State machine.

    When the mode of the device is changed, Windows sends a WM_SETTINGCHANGE message to the top-level window only, with “ConvertibleSlateMode” in the LPARAM parameter. Bezier_MTI detects the configuration change notification from the OS via this message.

    If LPARAM points to a string equal to “ConvertibleSlateMode”, then the app should call GetSystemMetrics(SM_CONVERTIBLESLATEMODE). A “0” returned means it is in tablet mode. A “1” returned means it is in non-tablet mode (Figure 10).

    	...
    	
    	//---------------------------------------------------------------------
    	// Process system setting update.
    	//---------------------------------------------------------------------
    	case WM_SETTINGCHANGE:
    	
        //-----------------------------------------------------------------
    	   // Check slate status.
    	   //-----------------------------------------------------------------
    	   if(
    	      ((TCHAR *)lparam != NULL) &&
    	      (
    	         _tcsnccmp(
    	            (TCHAR *)lparam,
    	            CONVERTIBLE_SLATE_MODE_STRING,
    	            _tcslen(CONVERTIBLE_SLATE_MODE_STRING)
    	         ) == 0
    	       )
    	   ) {
    	
    	      //-------------------------------------------------------------
    	      // Note:
    	      //    SM_CONVERTIBLESLATEMODE reflects the state of the 
    	      // laptop or slate mode. When this system metric changes,
    	      // the system sends a broadcast message via WM_SETTING...
    	      // CHANGE with "ConvertibleSlateMode" in the LPARAM.
    	      // Source: MSDN.
    	      //-------------------------------------------------------------
    	      ret = GetSystemMetrics(SM_CONVERTIBLESLATEMODE);
    	      if(ret == 0) {
    	         data._2_in_1_data.device_configuration = 
    	            DEVICE_CONFIGURATION_TABLET
    	         ;
    	      } else {
    	         data._2_in_1_data.device_configuration = 
    	            DEVICE_CONFIGURATION_NON_TABLET
    	         ;
    	      }
    	...

    Figure 10. Code example for detecting device mode change.

    As good practice, Bezier_MTI includes an override button to manually set the device mode. The button is displayed as a Question Mark (Figure 11) at application startup; then changes to a Phone icon (Figure 12) or a Desktop icon (Figure 13) depending on the device mode at the time. The user is able to touch the icon to manually override the detected display mode. The application display changes according to the mode selected/detected. Note that in this sample, the mode annunciator is conveniently used as a manual override button.


    Figure 11. Device status unknown.

    A phone icon is displayed in tablet mode.


    Figure 12. Note the larger control points.

    A desktop icon is displayed in non-tablet mode.


    Figure 13. Note the smaller control points.

    How do I notify my application of a device configuration change?

    Most of the changes in this sample are graphics related. An adaptive UI should also change the nature and the number of the functions exposed to the user (this is not covered in this sample).

    For the graphics, you should disassociate the graphics rendering code from the management code. Here, the drawing of the Bézier curve and other UI elements is separated from the geometry data computation.

    In the graphics rendering code, you should avoid using static GDI objects. For example, the pens and brushes should be re-created each time a new drawing is performed, so the parameters can be adapted to the current status, or more generally to any sensor information. If no changes occur, there is no need to re-create the objects.

    This way, as in the sample, the size of the UI elements adapts automatically to the device configuration readings. This impacts not only the color, but also the objects’ size. Note that the system display’s DPI (dots per inch) should be taken into account during the design of this feature. Indeed, small form factor devices have high DPI. This is not a new consideration, but it becomes more important as device display DPI increases.
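
    The sketch below illustrates the idea: the pen and brush are created inside the drawing routine, sized from the current device configuration, and destroyed after the drawing call. The variable names and size values are illustrative assumptions, not code taken from the sample.

    #include <windows.h>
    
    /* Illustrative state; in the sample this information comes from the
       WM_SETTINGCHANGE / GetSystemMetrics logic shown in Figure 10. */
    static BOOL g_is_tablet_mode = FALSE;
    
    static void draw_control_point(HDC hdc, int x, int y)
    {
        /* Pick sizes per draw call so a mode change is reflected on the
           next WM_PAINT without keeping static GDI objects around. */
        int radius    = g_is_tablet_mode ? 24 : 12;
        int pen_width = g_is_tablet_mode ? 4 : 2;
    
        HPEN   pen   = CreatePen(PS_SOLID, pen_width, RGB(200, 0, 0));
        HBRUSH brush = CreateSolidBrush(RGB(255, 160, 160));
    
        HPEN   old_pen   = (HPEN)SelectObject(hdc, pen);
        HBRUSH old_brush = (HBRUSH)SelectObject(hdc, brush);
    
        Ellipse(hdc, x - radius, y - radius, x + radius, y + radius);
    
        /* Restore the DC and delete the per-draw objects. */
        SelectObject(hdc, old_pen);
        SelectObject(hdc, old_brush);
        DeleteObject(pen);
        DeleteObject(brush);
    }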

    In our case, we decided to re-use the size change mechanism that we implemented for the ALS support (Figure 14). We do this simply by setting the UI objects’ size to the minimum when the system is in non-tablet mode and to the maximum when it is in tablet mode (by convention, the unknown mode maps to the non-tablet mode).

    	...
    	ret = GetSystemMetrics(SM_CONVERTIBLESLATEMODE);
    	   if(ret == 0) {
    	      data._2_in_1_data.device_configuration = 
    	      DEVICE_CONFIGURATION_TABLET
    	      ;
    	         //---------------------------------------------------------
    	         shared_data.lux = MAX_LUX_VALUE;
    	         shared_data.light_coefficient = NORMALIZE_LUX(shared_data.lux);
    	
    	   } else {
    	         data._2_in_1_data.device_configuration = 
    	            DEVICE_CONFIGURATION_NON_TABLET
    	         ;
    	         //---------------------------------------------------------
    	      shared_data.lux = MIN_LUX_VALUE;
    	      shared_data.light_coefficient = NORMALIZE_LUX(shared_data.lux);
    	      }
    	...

    Figure 14. Code example for changing the UI.

    The following code (Figure 15) shows how a set of macros makes this automatic. These macros are then used in the sample’s drawing functions.

    	...
    	   #define MTI_SAMPLE_ADAPT_TO_LIGHT(v) \
    	    ((v) + ((int)(shared_data.light_coefficient * (double)(v))))
    	
    	   #ifdef __MTI_SAMPLE_LINEAR_COLOR_SCALE__
    	   #define MTI_SAMPLE_LIGHT_COLOR_COEFFICIENT \
    	      (1.0 - shared_data.light_coefficient)
    	   #else // __MTI_SAMPLE_LINEAR_COLOR_SCALE__
    	      #define MTI_SAMPLE_LIGHT_COLOR_COEFFICIENT \
    	         (log10(MAX_LUX_VALUE - shared_data.lux))
    	   #endif // __MTI_SAMPLE_LINEAR_COLOR_SCALE__
    	
    	   #define MTI_SAMPLE_ADAPT_RGB_TO_LIGHT(r, g, b) \
    	   RGB( \
    	    (int)(MTI_SAMPLE_LIGHT_COLOR_COEFFICIENT * ((double)(r))), \
    	    (int)(MTI_SAMPLE_LIGHT_COLOR_COEFFICIENT * ((double)(g))), \
    	    (int)(MTI_SAMPLE_LIGHT_COLOR_COEFFICIENT * ((double)(b))) \
    	...

    Figure 15. Macro example.

    Conclusion


    The Windows 8 and Windows 8.1 user interface allows developers to customize the user experience for 2 in 1 devices. The device usage mode change can be detected and the application interface changed dynamically, resulting in a better experience for the user.

    About the Authors


    Stevan Rogers has been with Intel for over 20 years. He specializes in systems configuration and lab management and develops marketing materials for mobile devices using Line Of Business applications.

    Jamel Tayeb is the architect of the Intel® Energy Checker SDK and a software engineer in Intel's Software and Services Group. He has held a variety of engineering, marketing, and PR roles over his 10 years at Intel. Jamel has worked with enterprise and telecommunications hardware and software companies on optimizing and porting applications to Intel platforms, including Itanium and Xeon processors. Most recently, Jamel has been involved with several energy-efficiency projects at Intel. Before joining Intel, Jamel was a professional journalist. Jamel earned a PhD in Computer Science from Université de Valenciennes, a post-graduate diploma in Artificial Intelligence from Université Paris 8, and a Professional Journalist Diploma from CFPJ (Centre de formation et de perfectionnement des journalistes – Paris Ecole du Louvre).

    Intel and the Intel logo are trademarks of Intel Corporation in the U.S. and other countries.
    *Other names and brands may be claimed as the property of others.
    Copyright© 2013 Intel Corporation. All rights reserved.

  • Microsoft Windows* 8.1
  • Multi-touch Interface
  • applications
  • 2 in 1
  • Developers
  • Microsoft Windows* 8
  • Windows*
  • Beginner
  • Sensors
  • User Experience and Design
  • Laptop
  • Tablet
  • Desktop
  • URL
  • Optimizing Windows* 8 Applications for Connected Standby


    Download the Article

    Optimizing Windows* 8 Applications for Connected Standby [Eng., PDF 1.4MB]

    Abstract

    This article describes how to verify and analyze the behavior of Windows* 8 applications in Connected Standby. Support for this mode is one of the Microsoft* WHQL requirements for Windows 8 [1]. The article explains how to identify applications that consume too much power in Connected Standby and how to fix the problem. It is intended for software developers, equipment manufacturers, and technical users.

    Introduction

    Connected Standby lets the system receive updates and stay reachable over any available network connection. The mode is similar to the way cell phones work: the phone stays connected to the cellular network even when its screen is off. In the same way, Windows 8 applications that support Connected Standby resume in an up-to-date state as soon as the system becomes active again. For more information about Connected Standby on the PC, see the Microsoft website [1].

    When the screen is turned off on a system that supports Connected Standby, all running software (both applications and the OS) begins operating under a new set of constraints. The Windows Desktop Activity Moderator (DAM) disables the execution of legacy applications, acting much like sleep mode: it suspends all applications in the user session and throttles all third-party services. These measures provide predictable power consumption while the system is idle, which allows systems that support Connected Standby to minimize resource usage and run on battery for a long time, while Windows 8 applications can still maintain the connectivity they need. In addition, as hardware power states become more sensitive, software services must behave correctly so they do not wake the system unnecessarily, because doing so consumes additional energy.

    The rest of this article describes tools and techniques for understanding system behavior in Connected Standby, followed by two examples of applications whose behavior in this mode can be optimized.

    Tools

    We used two widely available development tools to understand application behavior in Connected Standby.

    Windows PowerCfg

    Windows PowerCfg is a command-line utility for managing power settings. It uses Event Tracing for Windows (ETW) to build system profiles. Users can run Windows Powercfg to view and change power plans and settings such as sleep timeouts, wake timers, and power schemes. Running Powercfg with the -energy option analyzes common energy-efficiency and battery-life problems: changes to platform timer settings, timer changes made by processes and application libraries, and CPU usage per process. This option also checks whether the system supports Connected Standby and lists the hardware and OS power-management settings. Running Windows Powercfg requires administrator rights.

    Two command-line options are used to check for and obtain information about Connected Standby behavior:

    Powercfg -a: This option lists all available system sleep states. To run the command, open a Windows command prompt and type: % powercfg -a

    A system that supports Connected Standby reports its supported sleep states, one of which is Connected Standby. Figure 1 shows the output of powercfg -a on a system that supports Connected Standby.



    Figure 1. Powercfg -a output

    Powercfg -batteryreport

    The -batteryreport option provides information about Connected Standby support and other related data. It generates an HTML report on the system's battery usage statistics by collecting a profile from the always-running built-in system trace. The report summarizes the installed battery, the BIOS version, Connected Standby support, and battery life based on actual system usage. Figure 2 shows an example of the -batteryreport output on a PC that supports Connected Standby.

    Figure 2. Battery report showing Connected Standby support

    The report also shows battery usage in the active state, in the suspended state, and in Connected Standby, as shown in Figure 3.

    Figure 3. Battery usage in different states

    For more information about Windows Powercfg, see the Microsoft website [2].

    Microsoft Windows Performance Analyzer

    Windows Performance Analyzer (WPA), also known as xperf, is a set of monitoring tools for building detailed performance and power profiles of Microsoft Windows and applications. WPA is convenient for troubleshooting energy-drain problems.

    Before moving on to the examples, let's review the WPA terminology. The following definitions of the main WPA terms and column names are taken from the Windows Internals documentation at [3]:

    • Ready Thread: a thread in the ready state that is waiting to execute, or ready to be switched in after its wait completes. When looking for threads to execute, the dispatcher considers only the pool of threads in the ready state.
    • Standby: a thread in the standby state has been selected to run next on a particular processor. When the right conditions occur, the dispatcher performs a context switch to this thread. Only one thread can be in the standby state for each processor in the system. Note that a thread can be preempted out of the standby state before it ever executes (for example, if a higher-priority thread becomes runnable before the standby thread begins executing).
    • Waiting: a thread can enter the waiting state in several ways: the thread may wait for an object to be signaled for synchronization purposes, the operating system may wait on the thread's behalf (for example, for an I/O operation), or an environment subsystem may direct the thread to suspend itself. When the wait ends, the thread either starts executing immediately or returns to the ready state, depending on its priority.
    • CPU Precise: the CPU Usage (Precise) graph contains information associated with context-switch events. Each row corresponds to the set of data associated with a single context switch in which a thread started executing.
    • % CPU Usage: the CPU usage of the new thread after it is switched in, expressed as a percentage of total CPU time over the displayed time range.
    • Count: the number of context switches represented by the row (always 1 for individual rows).
    • NewThreadId: the thread ID of the new thread.
    • NewThreadStack: the stack of the new thread after it is switched in.
    • ReadyingProcess: the process that owns the readying thread.
    • SwitchInTime(s): the time at which the new thread is switched in.
    • LastSwitchOutTime (s): the time at which the new thread was last switched out.
    • TimeSinceLast (s): SwitchInTime(s) - LastSwitchOutTime (s)

    Figure 4 shows the column names in the WPA user interface.



    Figure 4. WPA overview

    Generic Events: user-provided events populated for analyzing kernel trace data.

    • OneShotTimer: can be part of the always-on timer in Connected Standby. The operating system fires OneShotTimer every 30 seconds. Applications can create a timer by calling SetTimer or SetEvent.
    • PeriodicTimer: these timers fire after a specified time has elapsed and then reset.

    Periodic timers operate at the application level and can trigger kernel-mode transitions. One-shot timers operate at the operating-system level during Connected Standby.
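
    For reference, the sketch below shows the kind of application-level periodic timer being described, created with SetTimer and removed with KillTimer; the timer ID, interval, and callback name are placeholders rather than code taken from the traces discussed here. Work performed in such a callback can wake the system during Connected Standby, which is exactly the behavior the traces help you find.

    #include <windows.h>
    
    #define IDT_POLL_TIMER 1  /* hypothetical timer ID */
    
    /* TIMERPROC callback invoked roughly every uElapse milliseconds.
       Work done here can wake the system during Connected Standby,
       so it should be minimal or the timer removed entirely. */
    static VOID CALLBACK poll_timer_proc(HWND hwnd, UINT msg,
                                         UINT_PTR id, DWORD tick)
    {
        /* ... periodic work ... */
    }
    
    void start_polling(HWND hwnd)
    {
        /* Fires every 30,000 ms until KillTimer is called. */
        SetTimer(hwnd, IDT_POLL_TIMER, 30000, poll_timer_proc);
    }
    
    void stop_polling(HWND hwnd)
    {
        KillTimer(hwnd, IDT_POLL_TIMER);
    }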

    Developers should run at least two traces: a baseline (with no applications installed) and a target (with the application installed), in order to isolate the application's impact.

    Collecting Trace Data

    • Run powercfg.exe -a to confirm that the system supports Connected Standby.
    • Install Windows Performance Analyzer from the Windows ADK [4].
    • Start the trace capture by creating a batch file with the following command line:
      • xperf -on PROC_THREAD+LOADER+INTERRUPT+DPC+CSWITCH+IDLE_STATES+POWER+TIMER+CLOCKINT+IPI+DISPATCHER+DISK_IO -stackwalk TimerSetPeriodic+TimerSetOneShot -clocktype perfcounter -buffering -buffersize 1024 -MinBuffers 128 -MaxBuffers 128
    • PROC_THREAD+LOADER: provides information about device interrupts and the timer.
    • INTERRUPT: used for interrupt-event analysis. Provides information related to hardware interrupts.
    • DPC: used for analyzing interrupt sources. Provides information related to DPC logs.
    • CSWITCH: used for analyzing interrupt sources. Provides information related to context switches.
    • IPI: provides information related to inter-processor interrupts.
    • TimerSetPeriodic+TimerSetOneShot: stacks required for timer analysis and device-interrupt analysis.
    • Let the system enter Connected Standby (for example, by pressing the power button).
      • Let xperf collect trace data for at least 4 hours. A longer trace gives a better picture of software activity in Connected Standby.
      • Wake the system from Connected Standby (for example, by pressing the power button).
    • Stop the trace:

    xperf -flush
    xperf -stop
    xperf -merge \kernel.etl MyTrace.etl

    When the trace completes, a file named Mytrace.etl is created in the current folder.

    Post-Processing the Trace

    Run the following command to post-process the trace file with wake information:

    xperf -symbols -i mytrace1.etl -o cleanCS_diag.csv -a energydiag -verbose

    You can process a specific region of the trace by specifying a range:

    xperf -symbols -i mytrace1.etl -o cleanCS_diag.csv -a energydiag -range T1 T2

    For example: xperf -symbols -i -o EnergyDiag.csv -a energydiag -verbose -range 1000000 15000000000

    Figure 5 shows the files created after post-processing.

    cleanCS_diag: contains all events and the system wake activity.

    MyTrace1: contains the raw trace information.



    Figure 5. Example trace output

    cleanCS_diag:

    Post-processing the collected trace data produces a log containing the number of device interrupts, timer ticks, and the results for each CPU. It also shows device frequency and timer wake activity. Post-processing can likewise be applied to traces collected while the system is idle or actively working. Post-processing the scenario helps determine the impact of software activity on power consumption.

    Figure 6. Post-processing script output

    The total number of device interrupts (Figure 6) is the sum of the interrupt counts of all device modules in the collected trace. The total timer expirations are the subset of interrupts caused by timers firing. In Connected Standby, timer expirations include system timers, events, and the oneshottimer and periodictimer associated with throttling.

    The next step is to understand what the system is doing in Connected Standby. Scroll down in the report to the Busy Enter/Exit Time table and find the All CPUs group. The Busy Percent value gives a precise estimate of system activity in Connected Standby, showing how busy the system is. The higher the busy value relative to the baseline, the greater the impact on power consumption. Figure 7 shows a baseline busy trace with no test applications installed. Figure 8 shows a trace with several applications and a background service running. Comparing Figures 7 and 8 shows a 150x increase in busy time caused by application-triggered wakes and the background service.



    Figure 7. Baseline output



    Figure 8. Trace output with applications installed

    Analyzing the raw trace data:

    You can also examine the trace file directly in Windows Performance Analyzer. Figure 9 shows the Graph Explorer tool in WPA.



    Figure 9. WPA window after opening the trace file

    Figure 10 shows the Computation data on the analysis tab. You can zoom in on the activity bars to see process and system wake activity. Figure 10 shows how the system's OneShotTimer aligns with process activity.



    Figure 10. Overall view of the system in Connected Standby

    To examine the OneShotTimer calls coming from the system, drag the events from the system activity group into the analysis window. You can load symbols from the Microsoft server or from the application's symbol folder by selecting Load Symbols on the Trace menu. Figure 11 shows this item on the Trace menu.



    Figure 11. Loading symbols

    In WPA you can enable a graph and a table for stack analysis and for decoding processes and threads by clicking the first item in the upper-right corner of a WPA graph, as shown in Figure 12.

    Figure 12. WPA graph and table

    Next, enable the columns shown in Figure 12 to analyze the OneShotTimer stack activity.

    Arrange the analysis table columns to find wake activity triggered by the system or by application services. Figure 13 shows the System process, with thread ID 68, firing OneShotTimer 36 times over the displayed time range. The wake is triggered by the System process every 30 seconds.



    Figure 13. OneShotTimer stack activity in WPA

    "Good" and "bad" behavior:

    When optimizing applications to reduce power consumption, it is important to understand the difference between "good" and "bad" behavior. Activities such as storage access or network access for software updates can wake the system if they run outside of a system wake.

    Good behavior: the application service runs within the System process wake; that is, the application service goes to sleep before the System process goes to sleep. This approach helps meet the Microsoft Windows 8 WHQL requirement for Connected Standby: no more than 5% of battery charge may be consumed over 16 hours of Connected Standby.

    Bad behavior: the application runs independently of the System process, or goes to sleep after the System process has gone to sleep. Unaligned wakes can cause excess power consumption in Connected Standby, making it impossible to meet the Microsoft WHQL requirement.

    Figure 14 shows "good" and "bad" behavior in Connected Standby.



    Figure 14. "Good" and "bad" behavior in Connected Standby

    Example 1. Storage access.

    Software services such as antivirus and software-update services commonly access local storage. When these services run in Connected Standby, local storage access should be deferred until the System process wakes. Figure 15 shows a storage-access scenario lasting roughly 65 seconds in Connected Standby. The application wakes when the System process (highlighted in orange) enters its active sleep state. ProcessX.exe initiates storage access in System32, which prevents the system from entering Connected Standby. The application can be optimized by eliminating the long storage access. If the application needs storage access in Connected Standby, it can coalesce its work with the system's activity and enter a wait state driven by the power-state-change broadcast notification.



    Figure 15. Application service storage access in Connected Standby

    After this change, the storage-access process and the System process are coalesced in Connected Standby (see Figure 16). This is an example of good behavior: the application does not affect the system's power consumption.



    Figure 16. Optimized storage access in Connected Standby

    Example 2. Application thread wakes.

    Optimizing application wakes caused by the OS is a harder task. You need to understand the CPU Usage (Precise) and Generic events to determine whether a OneShotTimer fires when the System process wakes. Figure 17 shows an application thread waking while the System process is asleep. This is an example of poorly written process services that keep the system awake unnecessarily. ProcessX.exe (ID: 2440) creates several threads. The table in Figure 17 shows two threads that are not aligned with the System process. Using the Generic Events table, you can correlate the thread IDs with setTimer and clock interrupts. As shown in Figure 17, there are Timer Set thread tasks worth examining (thread IDs 3432 and 1824). Next, correlate the thread IDs obtained in the previous step (Thread ID 3432 and Thread ID 1824) with the CPU Usage (Precise) table to find the activity associated with those threads. The activity may be related to Timer Set, to thread scheduling, or to I/O operations. For easier visualization, you can plot several graphs in one view.



    Figure 17. Application threads keeping the system active during sleep

    The SetTimer function can be used to modify a thread timer in an application.

    UINT_PTR WINAPI SetTimer(
      _In_opt_  HWND hWnd,
      _In_      UINT_PTR nIDEvent,
      _In_      UINT uElapse,
      _In_opt_  TIMERPROC lpTimerFunc
    );
    

    The application window (HWND) is used to process the notifications through the window procedure, which is called after the number of milliseconds specified by uElapse, even after the System process has entered Connected Standby.

    If your application has a window (HWND) and you want to handle these notifications through the window procedure, call RegisterSuspendResumeNotification to register for these messages (and UnregisterSuspendResumeNotification to unregister). Use DEVICE_NOTIFY_WINDOW_HANDLE in the Flags parameter and pass the window's HWND in the Recipient parameter. The message received is WM_POWERBROADCAST.

    If your application does not have an HWND or you want a direct callback, call PowerRegisterSuspendResumeNotification to register for these messages (and PowerUnregisterSuspendResumeNotification to unregister). Use DEVICE_NOTIFY_CALLBACK in the Flags parameter and pass a value of type PDEVICE_NOTIFY_SUBSCRIBE_PARAMETERS in the Recipient parameter.
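
    A minimal sketch of the window-handle variant, assuming a conventional Win32 message loop, might look like the following; the PBT_* handling shown is an illustrative assumption rather than code from the article.

    #include <windows.h>
    
    static HPOWERNOTIFY g_power_notify;
    
    /* Register once the main window exists (for example, after CreateWindow). */
    void register_for_suspend_resume(HWND hwnd)
    {
        g_power_notify = RegisterSuspendResumeNotification(
            hwnd, DEVICE_NOTIFY_WINDOW_HANDLE);
    }
    
    void unregister_for_suspend_resume(void)
    {
        if (g_power_notify != NULL) {
            UnregisterSuspendResumeNotification(g_power_notify);
            g_power_notify = NULL;
        }
    }
    
    /* Excerpt from the window procedure: align deferred work with these
       notifications instead of running free-standing timers. */
    LRESULT CALLBACK wnd_proc(HWND hwnd, UINT msg, WPARAM wparam, LPARAM lparam)
    {
        switch (msg) {
        case WM_POWERBROADCAST:
            if (wparam == PBT_APMSUSPEND) {
                /* Entering a low-power phase: pause background work. */
            } else if (wparam == PBT_APMRESUMEAUTOMATIC) {
                /* The system is active again: resume deferred work. */
            }
            return TRUE;
        }
        return DefWindowProc(hwnd, msg, wparam, lparam);
    }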

    Conclusion

    Implementing Connected Standby support in applications is critical for extending battery life. Systems that support Connected Standby must meet the Windows Hardware Certification (WHCK) requirements for power consumption. These requirements state that, in the default factory configuration, a system in Connected Standby must consume no more than 5% of battery capacity over a 16-hour idle period. The certification test is available on the Microsoft WHCK site.

    About the Author

    Manuj Sabharwal is a software engineer in the Software Solutions Group at Intel. Manuj researches ways to improve software power efficiency in the active and idle states. He has significant technical expertise in power efficiency and has developed a number of training courses and technical references used across the industry. He also works on enabling client platforms through software optimization.

    References

    [1] Microsoft WHCK: http://msdn.microsoft.com/en-US/library/windows/hardware/jj128256

    [2] PowerCfg: http://technet.microsoft.com/en-us/library/cc748940(WS.10).aspx

    [3] Windows Internals: http://technet.microsoft.com/en-us/sysinternals/bb963901.aspx

    [4] Windows Assessment Toolkit: http://www.microsoft.com/en-us/download/details.aspx?id=30652

    *Other names and brands may be claimed as the property of others.

    Copyright ©2013 Intel Corporation.

  • ultrabook
  • applications
  • sensors
  • Developers
  • Microsoft Windows* 8
  • Windows*
  • Optimization
  • Sensors
  • Laptop
  • Tablet
  • Desktop
  • URL
  • Developing Windows* 8 Desktop Touch Apps with Windows* Presentation Foundation


    By Bruno Sonnino

    Downloads


    Developing Windows* 8 Desktop Touch Apps with Windows* Presentation Foundation [PDF 733KB]

    The launch of the Windows 8 operating system made touch a first-class citizen, and device manufacturers started to introduce new devices with touch-enabled displays. Such devices are now becoming more common and less expensive. In addition, manufacturers have introduced a new category of devices, 2 in 1 Ultrabook™ devices, which are lightweight and have enough power to be used as a traditional notebook with a keyboard and mouse or as a tablet using touch or a stylus.

    These machines open new opportunities for developers. You can enable your apps for touch and make them easier to use and more friendly. When you create Windows Store apps for Windows 8, this functionality comes free. But what about desktop apps, the bread and butter of non-Web developers? You haven’t been forgotten.

    In fact, desktop apps are already touch enabled, and Windows Presentation Foundation (WPF) has had built-in support for touch since version 4.0, as you’ll see in this article.

    Design for Touch Apps


    Microsoft has categorized touch for desktop apps according to three scenarios: good, better, and best.

    Good Apps

    With Windows 8, every desktop app has built-in touch support. All touch input is translated into mouse clicks, and you don’t need to change your app for it. After you have created a desktop app, it will work with touch. Users can click buttons, select list items or text boxes with a finger or stylus, and can even use a virtual keyboard to input text if no physical keyboard is available. You can see this behavior with File Manager, the Calculator app, Microsoft Notepad, or any of your desktop apps.

    Better Apps

    The built-in touch behavior in Windows 8 is good and doesn’t need much effort to develop, but it’s not enough. You can go a step further by adding gesture support for your app. Gestures are one- or two-finger actions that perform some predefined action—for example, tap to select, drag to move, flick to select the next or previous item, or pinch to zoom and rotate. See Figure 1.


    Figure 1. Common gestures

    The operating system translates these gestures into the WM_GESTURE message. You can develop a program to handle this message and process the gestures, which will give your apps a bonus because you can support actions exclusive to touch-enabled devices.

    Best Apps

    At the pinnacle of Microsoft’s rating scheme, you can develop the best app for touch by designing it to support full touch functionality. Now, you may ask, “Why should I design for touch? Don’t my apps work well enough for touch?” The answer, most of the time, is no.

    Apps designed for touch are different from conventional apps in several ways:

    • The finger isn’t a mouse. It does not have the precision a mouse has and so the UI requires some redesign. Buttons, check boxes, and list items should be large enough that users can touch inside them with minimal error.
    • Touch apps may not have a keyboard available. Although users can use a virtual keyboard, it’s not the same as the real thing. Rethinking the user interface (UI) to minimize keyboard input for touch apps can make the apps easier to use.
    • Many contacts can occur at the same time. With a mouse, the program has a single input point, but with touch apps, there may be more than one input. Depending on the device, the program could accept 40 or 50 simultaneous inputs (imagine a touch table with five or six players).
    • Users can run the app in different orientations. Although traditional apps run in landscape, this is not true with touch apps. Users can rotate devices, and in some cases, there may be more than one user, such as one on either side of the device.
    • Users don’t have easy access to the whole device area. If a user is holding a tablet in his or her hands, it may be difficult to access the center of the device, because the user will have to hold it with one hand while touching it with the other one.

    A “best” touch app must handle all of these issues and not abandon traditional data-entry methods with mouse and keyboard, or the app won’t work on devices that don’t have touch.

    Touch Support in WPF


    With WPF, you can add full touch support for your apps. You can add gestures or even full touch support with manipulations and inertia.

    Adding Gestures to Your App

    One way to add gestures to your apps is to process the WM_GESTURE message. The MTGestures sample in the Windows* 7 software development kit (SDK) shows how to do it. Just install the Windows 7 SDK and go to the samples directory (for the link, see the “For More Information” section at the end). Listing 1 shows the code.

    Listing 1. Message processing in the MTGesture SDK sample

    [PermissionSet(SecurityAction.Demand, Name = "FullTrust")]
    protected override void WndProc(ref Message m)
    {
        bool handled;
        handled = false;
    
        switch (m.Msg)
        {
            case WM_GESTURENOTIFY:
                {
                    // This is the right place to define the list of gestures
                    // that this application will support. By populating 
                    // GESTURECONFIG structure and calling SetGestureConfig 
                    // function. We can choose gestures that we want to 
                    // handle in our application. In this app we decide to 
                    // handle all gestures.
                    GESTURECONFIG gc = new GESTURECONFIG();
                    gc.dwID = 0;                // gesture ID
                    gc.dwWant = GC_ALLGESTURES; // settings related to gesture
                                                // ID that are to be turned on
                    gc.dwBlock = 0; // settings related to gesture ID that are
                                    // to be turned off
    
                    // We must p/invoke into user32 [winuser.h]
                    bool bResult = SetGestureConfig(
                        Handle, // window for which configuration is specified
                        0,      // reserved, must be 0
                        1,      // count of GESTURECONFIG structures
                        ref gc, // array of GESTURECONFIG structures, dwIDs 
                                // will be processed in the order specified 
                                // and repeated occurrences will overwrite 
                                // previous ones
                        _gestureConfigSize // sizeof(GESTURECONFIG)
                    );
    
                    if (!bResult)
                    {
                       throw new Exception("Error in execution of SetGestureConfig");
                    }
                }
                handled = true;
                break;
    
            case WM_GESTURE:
                // The gesture processing code is implemented in 
                // the DecodeGesture method
                handled = DecodeGesture(ref m);
                break;
    
            default:
                handled = false;
                break;
        }
    
        // Filter message back up to parents.
        base.WndProc(ref m);
    
        if (handled)
        {
            // Acknowledge event if handled.
            try
            {
                m.Result = new System.IntPtr(1);
            }
            catch (Exception excep)
            {
                Debug.Print("Could not allocate result ptr");
                Debug.Print(excep.ToString()); 
            }
        }
    }

    You must override the window procedure, configure what kind of gestures you want when you receive the WM_GESTURENOTIFY message, and process the WM_GESTURE message.

    As you can see, adding gestures to a C# app isn’t a simple task. Fortunately, there are better ways to do it in WPF. WPF has support for the stylus and raises the StylusSystemGesture event when the system detects a touch gesture. Let’s create a photo album that shows all photos in the Pictures folder and allows us to move between images by flicking to the right or left.

    Create a new WPF app and add to the window three columns, two buttons, and an Image control. Listing 2 shows the code.

    Listing 2. XAML markup for the new WPF app

    <Grid>
        <Grid.ColumnDefinitions>
            <ColumnDefinition Width="40" />
            <ColumnDefinition Width="*" />
            <ColumnDefinition Width="40" />
        </Grid.ColumnDefinitions>
        <Button Grid.Column="0" Width="30" Height="30" Content="<" />
        <Button Grid.Column="2" Width="30" Height="30" Content=">" />
        <Image x:Name="MainImage" Grid.Column="1" />
    </Grid>

    Now, create a field named _filesList and another named _currentFile. See Listing 3.

    Listing 3. Creating the _filesList and _currentFile fields

    private List<string> _filesList;
    private int _currentFile;

    In the constructor of the main window, initialize _filesList with the list of files in the My Pictures folder. See Listing 4.

    Listing 4. Main window constructor

    public MainWindow()
    {
        InitializeComponent();
        _filesList = Directory.GetFiles(Environment.GetFolderPath(
            Environment.SpecialFolder.MyPictures)).ToList();
        _currentFile = 0;
        UpdateImage();
    }

    UpdateImage sets the image source to the current file, as shown in Listing 5.

    Listing 5. Updating the image

    private void UpdateImage()
    {
        MainImage.Source = new BitmapImage(new Uri(_filesList[_currentFile]));
    }

    Then, you must create two functions to show the next and previous images. Listing 6 shows the code.

    Listing 6. Functions to show the next and previous images

    private void NextFile()
    {
        _currentFile = _currentFile + 1 == _filesList.Count ? 0 : _currentFile + 1;
        UpdateImage();
    }
    
    private void PreviousFile()
    {
        _currentFile = _currentFile == 0 ? _filesList.Count-1 : _currentFile - 1;
        UpdateImage();
    }

    The next step is to create the handlers for the Click event for the two buttons that call these functions.

    In MainWindow.xaml, type the code in Listing 7.

    Listing 7. Declaring the Click event handlers in MainWindow.xaml

    <Button Grid.Column="0" Width="30" Height="30" Content="&lt;" Click="PrevClick"/>
    <Button Grid.Column="2" Width="30" Height="30" Content="&gt;" Click="NextClick"/>

    In MainWindow.xaml.cs, type the code in Listing 8.

    Listing 8. Creating the Click event handlers in MainWindow.xaml.cs

    private void PrevClick(object sender, RoutedEventArgs e)
    {
        PreviousFile();
    }
    
    private void NextClick(object sender, RoutedEventArgs e)
    {
        NextFile();
    }

    When you run the program, you will see that it shows the My Pictures images. Clicking the buttons allows you to cycle through the images. Now, you must add gesture support, which is simple. Just add the handler for the StylusSystemGesture event in the grid:

    Listing 9. Declaring the StylusSystemGesture event handler in MainWindow.xaml

    <Grid Background="Transparent" StylusSystemGesture="GridGesture" />

    Note that I have added a background to the grid. If you don’t do that, the grid won’t receive the stylus events. The code of the handler is shown in Listing 10.

    Listing 10. The grid handler

    private void GridGesture(object sender, StylusSystemGestureEventArgs e)
    {
        if (e.SystemGesture == SystemGesture.Drag)
            NextFile();
    }

    If you are following along with this article and performing the steps, you will notice that there is a SystemGesture.Flick that I didn't use. That gesture is reported only on Windows Vista*; later Windows versions report the Drag gesture instead. You will also notice that I am not differentiating a forward flick from a backward one (or even a horizontal flick from a vertical one). That's because there is no built-in support for doing so, but we will take care of that next. Run the program and see that a flick in any direction brings up the next image.

    To handle the direction of the flick, you must check its starting and end points. If the distance is larger in the horizontal direction, treat it as a horizontal flick. The sign of the difference between the end and starting points shows the direction. Declare the handler for the StylusDown event for the grid in the .xaml file, as shown in Listing 11.

    Listing 11. Declaring the StylusDown event for the grid

    <Grid Background="Transparent" 
          StylusSystemGesture="GridGesture"
          StylusDown="GridStylusDown">

    The code for this handler is shown in Listing 12.

    Listing 12. Creating the handler

    private void GridStylusDown(object sender, StylusDownEventArgs e)
    {
        _downPoints = e.GetStylusPoints(MainImage);
    }

    When the stylus is down, we store the contact points in the _downPoints field (a StylusPointCollection declared in the window class). You must modify the StylusSystemGesture event handler to determine the direction of the flick. See Listing 13.

    Listing 13. Modifying the StylusSystemGesture event

    private void GridGesture(object sender, StylusSystemGestureEventArgs e)
    {
        if (e.SystemGesture != SystemGesture.Drag)
            return;
        var newPoints = e.GetStylusPoints(MainImage);
        bool isReverse = false;
        if (newPoints.Count > 0 && _downPoints.Count > 0)
        {
          var distX = newPoints[0].X - _downPoints[0].X;
          var distY = newPoints[0].Y - _downPoints[0].Y;
          if (Math.Abs(distX) > Math.Abs(distY))
          {
            isReverse = distX < 0; // Horizontal
          }
          else
          {
            return;  // Vertical
          }
        }
        if (isReverse)
            PreviousFile();
        else
            NextFile();
    }

    When the Drag gesture is detected, the program creates the new points and verifies the largest distance to determine whether it’s horizontal or vertical. If it’s vertical, the program doesn’t do anything. If the distance is negative, then the direction is backwards. That way, the program can determine the kind of flick and its direction, going to the next or the previous image depending on the direction. The app now works for touch and the mouse.

    Adding Touch Manipulation to a WPF App

    Adding gestures to your app is a step in the right direction, but it’s not enough. Users may want to perform complex manipulations, use more than one or two fingers, or want a physical behavior that mimics the real world. For that, WPF offers touch manipulations. Let’s create a WPF touch app to see how it works.

    In Microsoft Visual Studio*, create a new WPF app and change the window width and height to 800 and 600, respectively. Change the root component to a Canvas. You should have code similar to Listing 14 in MainWindow.xaml.

    Listing 14. The new WPF app in Visual Studio

    <Window x:Class="ImageTouch.MainWindow"
            xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
            xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
            Title="MainWindow" Height="600" Width="800">
        <Canvas x:Name="LayoutRoot">
            
        </Canvas>
    </Window>

    Go to the Solution Explorer and add an image to the project (right-click the project, click Add/Existing Item, and select an image from your disk). Add an image component to the main Canvas, assigning the Source property to the added image:

    Listing 15. Image added to the main canvas

    <Image x:Name="MainImage" Source="seattle.bmp" Width="400" />

    If you run this program, you will see that it is already touch enabled. You can resize and move the window, and touch input is automatically converted to mouse input. However, that’s not what you want. You want to use touch to move, rotate, and resize the image.

    For that, you must use the IsManipulationEnabled property. When you set this property to true, the control receives touch events. The ManipulationDelta event is fired every time a manipulation in the control occurs. You must handle it and set the new properties of the image. In the .xaml file, set the property IsManipulationEnabled to true and declare a ManipulationDelta event, as shown in Listing 16.

    Listing 16. Enabling touch manipulation

    <Image x:Name="MainImage" Source="seattle.bmp" Width="400" 
           IsManipulationEnabled="True" 
           ManipulationDelta="MainImageManipulationDelta">
        <Image.RenderTransform>
            <MatrixTransform />
        </Image.RenderTransform>
    </Image>

    I have also added a MatrixTransform to the RenderTransform property. You change this transform when the user manipulates the image. The event handler should be similar to Listing 17.

    Listing 17. Adding an event handler for image manipulation

    private void MainImageManipulationDelta(object sender, ManipulationDeltaEventArgs e)
    {
        FrameworkElement element = sender as FrameworkElement;
        if (element != null)
        {
            var transformMatrix = element.RenderTransform
                as MatrixTransform;
            var matrix = transformMatrix.Matrix;
            matrix.Translate(e.DeltaManipulation.Translation.X,
                e.DeltaManipulation.Translation.Y);
            ((MatrixTransform)element.RenderTransform).Matrix = matrix;
            e.Handled = true;
        }
    }

    Initially, you get the current RenderTransform of the image, use the Translate method to move it to the new position that the manipulation gives, and then assign it as the matrix for the RenderTransform of the image. At the end, you set the Handled property to true to tell WPF that this method has handled the touch event and WPF should not pass it on to other controls. This should allow the image to move when a user touches it.

    If you run the app and try to move the image, you will see that it works but not as expected—the image flickers while moving. All manipulations are calculated relative to the image, but because this image is moving, you may have recursive recalculations. To change this behavior, you must tell WPF that all delta manipulations should be relative to the main window. You do so by using the ManipulationStarting event and setting the ManipulationContainer property of the event arguments to the Canvas.

    In MainWindow.xaml, enter the code in Listing 18.

    Listing 18. Correcting image movement in MainWindow.xaml

    <Image x:Name="MainImage" Source="seattle.bmp" Width="400" 
           IsManipulationEnabled="True" 
           ManipulationDelta="MainImageManipulationDelta"
           ManipulationStarting="MainImageManipulationStarting">

    In MainWindow.xaml.cs, enter the code in Listing 19.

    Listing 19. Correcting image movement in MainWindow.xaml.cs

    private void MainImageManipulationStarting(object sender, ManipulationStartingEventArgs e)
    {
        e.ManipulationContainer = LayoutRoot;
    }

    Now, when you run the app and move the image, it moves with no flicker.

    Adding Scaling and Rotation

    To enable resizing and rotation, you must use the Scale and Rotation properties of the DeltaManipulation. These manipulations need a fixed center point. For example, if you fix the center point at the top left of the image, elements will be scaled and rotated around this point. To get a correct translation and rotation, you must set this point to the origin of the manipulation. You can set the correct scaling and rotation in code similar to Listing 20.

    Listing 20. Setting scaling and rotation

    private void MainImageManipulationDelta(object sender, ManipulationDeltaEventArgs e)
    {
        FrameworkElement element = sender as FrameworkElement;
        if (element != null)
        {
            var transformMatrix = element.RenderTransform
                as MatrixTransform;
            var matrix = transformMatrix.Matrix;
            matrix.Translate(e.DeltaManipulation.Translation.X,
                e.DeltaManipulation.Translation.Y);
            var centerPoint = LayoutRoot.TranslatePoint(
                e.ManipulationOrigin, element);
            centerPoint = matrix.Transform(centerPoint);
            matrix.RotateAt(e.DeltaManipulation.Rotation,
              centerPoint.X, centerPoint.Y);
            matrix.ScaleAt(e.DeltaManipulation.Scale.X, e.DeltaManipulation.Scale.Y,
              centerPoint.X, centerPoint.Y);
            ((MatrixTransform)element.RenderTransform).Matrix = matrix;
            e.Handled = true;
        }
    }

    Adding Inertia

    When you run the app, you will see that the image moves, scales, and rotates fine, but as soon as you stop moving the image, it stops. This is not the desired behavior. You want the same behavior you have when you move an image on a smooth table. It should continue moving slower and slower until it stops completely. You can achieve this effect by using the ManipulationInertiaStarting event. In this event, you state the desired deceleration in pixels (or degrees) per millisecond squared. If you set a smaller value, it will take longer for the element to stop (like on an icy table); if you set deceleration to a larger value, the object takes less time to stop (like on a rough table). Set this value to 0.005.

    In MainWindow.xaml, enter the code in Listing 21.

    Listing 21. Setting deceleration in MainWindow.xaml

    <Image x:Name="MainImage" Source="seattle.bmp" Width="400" 
           IsManipulationEnabled="True" 
           ManipulationDelta="MainImageManipulationDelta"
           ManipulationStarting="MainImageManipulationStarting"
           ManipulationInertiaStarting="MainImageManipulationInertiaStarting"/>

    In MainWindow.xaml.cs, enter the code in Listing 22.

    Listing 22. Setting deceleration in MainWindow.xaml.cs

    private void MainImageManipulationInertiaStarting(object sender, 
        ManipulationInertiaStartingEventArgs e)
    {
        e.RotationBehavior.DesiredDeceleration = 0.005; // degrees/ms^2 
        e.TranslationBehavior.DesiredDeceleration = 0.005; // pixels/ms^2
    }

    Limiting the Inertial Movement

    Now, when you run the app, you will see that the manipulations seem close to the physical behavior. But if you give the object a good flick, it goes out of the window, and you have to restart the program. To limit the inertial movement, you must determine whether the delta manipulation is inertial (the user has already lifted his or her finger) and stop it if it reaches the border. You do this with the code in the ManipulationDelta event handler, shown in Listing 23.

    Listing 23. Limiting inertial movement

    private void MainImageManipulationDelta(object sender, ManipulationDeltaEventArgs e)
    {
        FrameworkElement element = sender as FrameworkElement;
        if (element != null)
        {
            Matrix matrix = new Matrix();
            MatrixTransform transformMatrix = element.RenderTransform
                as MatrixTransform;
            if (transformMatrix != null)
            {
                matrix = transformMatrix.Matrix;
            }
            matrix.Translate(e.DeltaManipulation.Translation.X,
                e.DeltaManipulation.Translation.Y);
            var centerPoint = new Point(element.ActualWidth / 2, 
                element.ActualHeight / 2);
            matrix.RotateAt(e.DeltaManipulation.Rotation,
              centerPoint.X, centerPoint.Y);
            matrix.ScaleAt(e.DeltaManipulation.Scale.X, e.DeltaManipulation.Scale.Y,
              centerPoint.X, centerPoint.Y);
            element.RenderTransform = new MatrixTransform(matrix);
    
            var containerRect = new Rect(LayoutRoot.RenderSize);
            var elementRect = element.RenderTransform.TransformBounds(
                              VisualTreeHelper.GetDrawing(element).Bounds);
            if (e.IsInertial && !containerRect.Contains(elementRect))
                e.Complete();
            e.Handled = true;
        }
    }

    Now, determine whether the transformed image rectangle is in the container rectangle. If it isn’t and the movement is inertial, stop the manipulation. That way, the movement stops, and the image doesn’t go out of the window.

    Conclusion


    As you can see, adding touch manipulations to a WPF application is fairly easy. You can start with the default behavior and add gestures or full touch support with a few changes to the code. One important thing to do in any touch-enabled app is to rethink the UI so that users feel comfortable using it. You can also use different styles for the controls on a touch device, so the buttons are larger and the lists are more widely spaced only when using touch. With touch devices becoming increasingly common, optimizing your apps for touch will make them easier to use, thus pleasing your existing users and attracting new ones.

    For More Information


    About the Author


    Bruno Sonnino is a Microsoft Most Valuable Professional (MVP) located in Brazil. He is a developer, consultant, and author who has written five Delphi books, published in Portuguese by Pearson Education Brazil, and many articles for Brazilian and American magazines and websites.

    Intel, the Intel logo, and Ultrabook are trademarks of Intel Corporation in the U.S. and/or other countries.
    Copyright © 2013 Intel Corporation. All rights reserved.
    *Other names and brands may be claimed as the property of others.

  • ULTRABOOK™
  • applications
  • 2 in 1
  • Touch Manipulation
  • Visual Studio
  • XAML
  • WPF
  • Developers
  • Microsoft Windows* 8
  • Windows*
  • C#
  • Intermediate
  • Microsoft Windows* 8 Desktop
  • Sensors
  • User Experience and Design
  • Laptop
  • Tablet
  • Desktop
  • URL

  • Developing Great HTML5 Apps for PC


    Download Article


    Developing Great HTML5 Apps for PC [PDF 591KB]

    HTML5 is an increasingly popular choice for software developers due to its support of rapid UI development and promise of simplified porting to multiple devices. On some devices, native apps have an advantage over HTML5 apps because native apps can fully use device capabilities. When considering HTML5 apps on PCs, most of those advantages can be made available to HTML5 apps with a few simple steps. This article will show you how to make your HTML5 app the best it can be on a PC.

    User agents for HTML5 apps on PCs


    HTML5 apps run within a user agent: a runtime environment designed to translate the HTML, CSS, and JavaScript* into an interactive experience. On the PC, the most common user agents are web browsers. For many years, browsers were the only options for HTML5 deployment, but now operating systems are providing their own user agents to support HTML5 directly. Microsoft Windows* 8 provides one in the form of Windows Store app APIs. Each of these user agents has different levels of support for PC device features; you should choose the right user agent based on its support for the features your app needs.

    HTML5 is versatile and adaptable. Web apps can work on virtually any platform and operating system without porting, with specific features available depending on the browser used. Using Visual Studio*, HTML5 apps can be crafted for Windows 8—both Desktop and Windows Store—allowing access to more and better features than any web browser. Web developers can now use their HTML, JavaScript, and CSS expertise in native apps.

    HTML5 in web browsers


    The much-desired “write once, run anywhere” mindset for HTML5 finds its home in web browsers. The platform agnostic code uses the browser as an abstraction layer, with straightforward detection of available functionality. Maintenance and feature changes are made to a single code base, without the need for redundant rewriting and inconsistent versions among different devices.

    As the name implies, a web app is easily distributed online; no downloading is required for distribution. This makes it ideal for a variety of projects, from social media games to enterprise-level tools. Code changes go live instantly, with no patching or app update necessary. Web apps can be run locally, but using the internet (or at least a network) simplifies distribution.

    HTML5 in packaged apps


    HTML has always been oriented toward visual composition and user experience, increasingly so with CSS. This lends itself naturally to creating user-friendly apps and lets the vast pool of web developers build packaged apps for Windows 8.

    On Windows 8, this works by running the app on the Internet Explorer* (IE) engine. IE has been historically shunned by web developers for being slow and non-standard, but these concerns are growing outdated. IE support of HTML5 and JavaScript has more than doubled from versions 9 to 10, with nearly half that again from 10 to 11 including WebGL*. Windows 8 augments the browser engine, increasing web app performance.

    The lower-level features of Windows 8 are afforded by the WinJS framework. This API picks up where the browser stops, offering greater device access.

    Windows 8 Desktop UI

    There are many approaches to Windows 8 Desktop packaging of HTML5 apps. Windows 8 Desktop presents touch-enabled apps in a familiar environment so users are comfortable with the appearance. Using Win32* libraries is possible here, and Win32 performance analysis tools can aid your development. Many device feature libraries use Win32 functions that are not available to Windows Store apps, restricting their use to the Desktop UI.

    Windows Store Apps

    Windows Store apps can be built from HTML5 as well. Windows Store app style lends itself to low-overhead touch interfaces focused on content. These apps have intuitive controls that use layout and text features to guide the user experience.

    These apps also have access to the charms, a succinct selection of functional mechanics including search and share. By swiping in from the right of the screen (or moving a cursor to either right corner) the charms menu will appear. Windows Store apps can tap into these features, allowing content to be searched and shared among apps.

    Windows 8 Live tiles work more like widgets than icons, updated while the app is not running in the foreground. These tiles serve to launch the apps as well as deliver at-a-glance content retrieved via web service. More information and development guidance can be found online.

    Key Device Features

    The choice of user agent on the PC determines which device features are available to your app. For a given device feature, API access methods vary for each user agent. Key features are described here with some guidance on how they can be accessed by user agents on the PC.

    Touch Gestures

    Touch gestures in any application on a Windows 8 touch-enabled device correspond to mouse actions: a tap works as a click, dragging works as scrolling, and so on. Beyond these defaults, touch gestures make full use of the interface (Table 1). Many browsers support gestures, as do Windows apps.

    Table 1. The touch-based nature of these apps is strongly supported by the basic touch gestures

    Gesture        | Description
    Tap            | Single touch and release. Used for primary action.
    Press and hold | Single touch and hold. Used to gain more information or access a context menu.
    Slide          | One or more fingers touch and move in one direction. Used to pan/scroll through content.
    Swipe          | Short slide. Used perpendicular to panning content to select, command, or move an object.
    Turn           | Two or more fingers touch the screen and rotate in a clockwise or counter-clockwise arc. Used to rotate content.
    Pinch          | Two or more fingers touch the screen and move toward each other. Used to zoom out.
    Stretch        | Two or more fingers touch the screen and move farther apart. Used to zoom in.


    Sensor Integration
    HTML5 has features to get geolocation data, the real-world geographical location of the device. Maximum fidelity is achieved by combining all information available, including IP geolocation, GPS, Wi-Fi* positioning, and cell tower triangulation. This is widely available in browsers as well as Windows 8 apps.

    In addition to geolocation, Windows 8 provides APIs that support a suite of other sensors (Table 2). By combining data from the accelerometer, gyrometer, and magnetometer in an approach called “sensor fusion,” jitter is reduced while minimizing latency. This approach allows for a few more sensors as well. While sensors are available in both Desktop and Windows Store apps, Desktop apps can change the sensitivity to suit specific needs. More information is available on MSDN.

    Table 2. Sensor data allows new control interfaces

    Sensor             | Description
    3D Accelerometer   | Captures acceleration on X, Y, and Z axes.
    3D Gyrometer       | Captures changes in angular velocity.
    3D Compass         | Captures changes in orientation.
    3D Inclinometer    | Captures changes in the inclination angle.
    Light              | Senses changes in ambient lighting.
    Device Orientation | Sensor fusion provides fine-grain orientation information.
    Simple Orientation | High-level orientation information such as facing up, facing down, and rotations of 90, 180, or 270 degrees.

    3D Rendering using WebGL


    WebGL, the web 3D graphics standard, is usable in everything including Windows Store apps as of Windows 8.1. Partial support is available in Firefox*, but IE11 and Chrome* are fully capable of rendering 3D games and apps. As WebGL matures, HTML5 will become an even more viable platform for all varieties of games.

    Communicating with Other Devices


    Similar to mobile devices, larger form factors such as Ultrabook™ devices can now make use of Near-Field Communication (NFC) to transmit information between devices. NFC support is available in Firefox and, for Windows 8 apps, through the Windows.Networking.Proximity library.

    Responsive Design Using Media Queries


    One of the biggest advantages of HTML5 is support of media queries. These allow “responsive design”—dynamic content based on the viewing device information. Web sites previously redirected users to mobile versions of the content, often ruining the experience with formatting glitches or malfunctioning code. Now we can have a single code base that displays the content properly on any screen. Windows 8 goes a step further, providing the view state information so the app can adapt just as easily to being “snapped” to a portion of the screen.

    Responsive design is most commonly seen in rearrangement of web site content based on the size of the viewable area (Figure 1). This can be done using CSS, JavaScript, or both.



    Figure 1. Comparison of page layouts with varying window sizes

    The style sheet defining this layout contains conditional segments. Here, the spacing and arrangement are dependent on screen width using media query statements:

    @media screen and (max-width: 980px) { … }
    

    Any styling between the curly braces will apply when the viewport width is less than 980 pixels. Further segments are defined to continue refining the layout for incrementally smaller displays. This is an elegant solution to multiple screens, but the repeated styling can impede loading times if a large amount is loaded at once. To solve this, the queries can be done using JavaScript to conditionally load different CSS files, or manipulate the content directly.

    if ( screen.width <= 980 ) { … }
    

    This statement will compare in a similar manner, but varying browser standards make redundant checks necessary. A more efficient control is to use window.matchMedia to store a media query.

    var mediaQuery = window.matchMedia( "(max-width: 980px)" );
    

    This will store a media query with the given condition, which can be checked at any time using mediaQuery.matches (a Boolean value). Since this check only happens when the code is executed, it needs to be called whenever the window is resized. Fortunately, the variable can be given a listener function which executes upon changes:

    mediaQuery.addListener(CheckQuery);
    function CheckQuery(mq){ … }
    

    Within this function, relevant style sheets or JavaScript files can be loaded, controls can be added or removed, and any other modifications can be made to keep the app easy to use at the new size. Note: this function should also be called once at startup to set the content for the initial size; otherwise, the layout will only update when the user resizes the window.

    The responsive design offered by media queries can serve in Windows apps by using the same code or modifying it for specific APIs in WinJS. With only superficial changes, the web app demo above was ported to a Windows Store app (Figure 2).



    Figure 2. Windows* Store app using media queries to modify the layout when snapped

    To make an app useful in snapped mode, it will often need modified controls. Detecting the layout change is necessary to maintain productivity and a positive user experience. These changes can also be handled by APIs unique to Windows 8. Windows.UI.ViewManagement.ApplicationView.value can be compared to Windows.UI.ViewManagement.ApplicationViewState.snapped to determine whether the application is in a snapped state, handled in a resize event as before.

    Integrating responsive design into web-based and compiled apps has long been a valuable feature for mobilization of web sites, but resizing the screen is no longer the only change users can make to the viewport. In the case of detachable 2-in-1 devices, an app needs to instantly convert its controls between laptop and tablet mode, often without any visible change in the screen size. Handling this smoothly can turn a potentially jarring transition into a slick feature point. More information on media queries and WinJS can be found in the references section below.

    Comparing the options


    Since many features are specific to individual browsers or app types, this chart compares what’s currently available.

    Feature                | Web Apps: Internet Explorer* | Web Apps: Firefox* | Web Apps: Chrome* | Windows* 8 Desktop Apps             | Windows Store Apps
    Intel® WiDi Technology | No                           | No                 | No                | Yes (Browser Plugins)               | No
    Touch Gestures         | No                           | Yes                | Yes               | Yes                                 | Yes
    Orientation Sensors    | IE11                         | Yes                | Yes               | Yes                                 | Yes
    Light Sensor           | No                           | Bugged             | No                | Yes                                 | Yes
    WebGL*                 | IE11                         | Partial            | Yes               | IE11                                | Windows 8.1
    Intel® Smart Connect   | No                           | No                 | No                | Yes                                 | Yes
    NFC                    | No                           | Yes                | No                | Yes                                 | Yes
    GPS                    | Yes                          | Yes                | Yes               | Yes                                 | Yes
    Media Queries          | Yes                          | Yes                | Yes               | Yes                                 | Yes
    Distribution           | Online                       | Online             | Online            | Intel AppUp® program, Windows Store | Windows Store

    Requires an Intel® Wireless Display enabled PC, compatible adapter, and TV. 1080p and Blu-Ray* or other protected content playback only available on 2nd generation Intel® Core™ processor-based PCs with built-in visuals enabled. Consult your PC manufacturer. For more information, see www.intel.com/go/widi

    Helpful Tools


    In the past, developers have used text editors and visual layout tools to create HTML apps, with suites of complex library dependencies to port to other platforms. Now tools such as the Intel® XDK offer a cross-platform development environment to bring it all together. More information and downloads are available at http://software.intel.com/en-us/html5/tools.

    References


    HTML5 Apps on the Desktop - http://clintberry.com/2013/html5-apps-desktop-2013/
    Porting Web Apps to Windows 8, Adding Charms - http://html5hacks.com/blog/2013/04/02/make-your-web-app-to-a-windows-8-app/
    Live Tiles - http://stackoverflow.com/questions/7442760/how-are-live-tiles-made-in-windows-8
    Touch Support in Browsers - http://www.html5rocks.com/en/mobile/touchandmouse/
    Touch Support in Windows - http://windows.microsoft.com/en-us/windows7/using-touch-gestures
    Geolocation in Browsers - http://diveintohtml5.info/geolocation.html
    Geolocation in Windows - http://software.intel.com/en-us/articles/geo-location-on-windows-8-desktop-applications-using-winrt
    Sensor Integration - http://blogs.msdn.com/b/b8/archive/2012/01/24/supporting-sensors-in-windows-8.aspx
    WebGL in Firefox - https://developer.mozilla.org/en-US/docs/Web/WebGL
    WebGL Tutorial Blog - http://learningwebgl.com/blog/?page_id=1217
    Media Queries - http://msdn.microsoft.com/en-us/library/windows/apps/hh453556.aspx
    Snapped View - http://slickthought.net/post/2012/08/27/Windows-8-and-HTML-Part-7-Supporting-Snapped-View.aspx
    Desktop apps with WinRT APIs - http://software.intel.com/en-us/articles/building-new-windows-8-desktop-applications-using-new-windows-rt-features
    Intel XDK - http://software.intel.com/en-us/html5/tools

    About the Author


    Brad Hill is a Software Engineer at Intel in the Developer Relations Division. Brad investigates new technologies on Intel hardware and shares the best methods with software developers via the Intel Developer Zone and at developer conferences. He also runs Code for Good Student Hackathons at colleges and universities around the country.

    Intel and the Intel logo are trademarks of Intel Corporation in the U.S. and/or other countries.
    Copyright © 2013 Intel Corporation. All rights reserved.
    *Other names and brands may be claimed as the property of others.

  • Live Tiles
  • WinJS
  • ULTRABOOK™
  • app development
  • Touch Devices
  • Microsoft Windows* 8
  • HTML5
  • Graphics
  • Microsoft Windows* 8 Desktop
  • Sensors
  • User Experience and Design
  • Laptop
  • Tablet
  • Desktop
  • URL
  • PERCEPTUAL COMPUTING: Perceptual 3D Editing


    Downloads


    PERCEPTUAL COMPUTING: Perceptual 3D Editing [PDF 839KB]

    By Lee Bamber

    1. Introduction


    If you’re familiar with Perceptual Computing and some of its applications, you will no doubt be wondering to what degree the technology can be used to create and manipulate the 3D world. Ever since the first batch of 3D games, modelling and motion capture have been parts of our app-making toolkit, and a wide variety of software and hardware has sprung up to answer the call for more immersive and life-like crafting.

    When looking at Perceptual Computing as the next natural technology to provide new solutions in this space, you might be tempted to think we can finally throw away our mice and keyboards and rely entirely on our real-world hands to create 3D content. When you start down this road, you begin to find it both a blessing and a curse, and this article will help you find your way with the help of one programmer’s map and a few sign posts.


    Figure 1. A simple 3D scene. The question is, can this be created with Perceptual Computing?

    Readers should have a basic understanding of Perceptual Computing as a concept and a familiarity with the hardware and software mechanisms required to control an application. No knowledge of programming or specific development platforms is required, only an interest in one possible evolutionary trajectory for 3D content creation.

    2. Why Is This Important


    It can be safely assumed that one of the benefits of increasingly powerful devices will be the adoption of 3D as a preferred method of visual representation. The real world is presented to us in glorious 3D and remains our preferred dimension in which to interact and observe. It is fair to conclude that the demand for 3D content, and for the tools that create it, will continue to increase, moving far beyond the modest needs of the games industry to become a global hunger.

    The current methods for creating 3D content and scenes are sufficient for the present, but what happens when five billion users want to experience new 3D content on a daily basis? A good 3D artist is expensive and hard to find, and good 3D content takes a long time to create! What if there was another way to fulfil this need?

    3. The Types of 3D Content


    If you are familiar with 3D game creation, you will be aware of several types of 3D content that go into a successful title. The terrain and structures that make up the location, the characters that play their roles, and the objects that populate your world and make everything a little more believable. You also have 3D panels and ‘heads-up-displays’ to feed information to the player, and a variety of 3D special effects to tantalize the watcher. How would we accomplish the creation of these different types using no mouse, no keyboard, no controller, or sculpting hardware? What might this creative process look like with Perceptual Computing?

    4. Editing Entire Worlds


    The terrain in a scene is often stretched out over an extremely large area, and either requires a team of designers to construct or a procedural function to randomize the world. When no specific location detail is required, you could use Perceptual Computing Voice Recognition to create your desired scene in a matter of seconds.

    Just imagine launching your new hands-free 3D editing tool by saying “New Scene.”


    Figure 2. The software immediately selects a brand new world for you to edit

    You then decide you want some vegetation, so you bring them forth as though by magic with the words “Add Trees.”


    Figure 3. With a second command, you have added trees

    You want your scene to be set at midnight, so you say “Set Time To Midnight.”


    Figure 4. Transform the scene completely by using a night setting

    Finally to make your creation complete, you say “More Hills” and the tool instantly responds by adding a liberal sprinkling of hills into your scene.


    Figure 5. Making the terrain more interesting with a few extra hills.

    The user has effectively created an entire forest world, lumpy and dark, in just a few seconds. You can perhaps see the possibilities for increased productivity here, but you can also see that we have removed the need for any special 3D skills. Anyone can now create their own 3D landscapes; all they need is a voice and a few common phrases. If at any time they get confused, they can say “Help” and a full selection of command words is displayed.

    5. Editing 3D in Detail


    The world-editing example is nothing remarkable, nor is it exclusively the domain of Perceptual Computing, but it suggests the types of interfaces that can be created when you think outside the box. The real challenge comes when you want to edit specific details, and this is where Perceptual Computing takes center stage.

    Now imagine that during general world editing you want to create something specific, say a particularly gnarled tree; the “Add Tree” command would be too generalized and random. So, just as you would in real life, you point at the screen and then say “Add Tree There.”


    Figure 6. As the user points, the landscape highlights to indicate where you’re pointing

    Unfortunately the engine assumed you wanted the tree in context and selected the same tree as the previous additions. It is fortunate then that our revolutionary new tool understands various kinds of context, whether it be selection context or location context. By saying “Change Tree to Gnarled,” the tree instantly transforms into a more appropriate visual.


    Figure 7. Our scene now has specific content created exactly where the user wanted it

    As you increase the vocabulary of the tool, your user is able to add, change, and remove an increasing number of objects, whether they are specific objects or more general world properties. You can imagine the enormous fun you can have making things pop in and out of existence, or transforming your entire world with a single word.

    For locomotion around your world, exactly the same interface is used but with additional commands. You could point to the top of a hill or distant mountain and say “Go There.” Camera rotation could be tackled with a gestured phrase “Look At That,” and when you want to save your position for later editing, use commands such as “Remember This Location” and “Return To Last Location.”

    6. The Trouble with 3D Editing


    No article would be complete without an impartial analysis of the disadvantages to this type of interface, and the consequences for your application.

    One clear advantage a mouse will have over a Perceptual coordinate is that the mouse pointer can set and hold a specific coordinate for seconds and minutes at a time without flinching. You could even go and make a cup of tea, and be very confident your pointer will be at the same coordinate when you return. A Perceptual coordinate, however, perhaps provided by a finger pointing at the screen, can rarely hold a fixed position for even a fraction of a second, and the longer the user attempts to maintain a fixed point, the more annoyed they will get.

    A keyboard can instantly communicate one of 256 different states in the time it takes to look and press. To get the Perceptual Camera to identify one of 256 distinct and correct signals in the same amount of time would be ambitious at best.

    Given these comparisons, it should be stated that even though you can increase productivity tenfold on the creation of entire worlds, the same level of production could decrease dramatically if you tried to draw some graffiti onto the side of a wall or building. If you could ever summon a laser to shoot out of your finger, or gain the power of eye lasers, you would quickly discover just how difficult it is to create even a single straight line.

    The lesson here is that the underlying mechanism of the creative process should be considered entirely. We can draw a straight line with the mouse, touchpad, and pen because we’re practised at it. We are not practised at doing it with a finger in mid-air. The solution would be to pre-create the straight line ahead of time and have the finger simply apply the context so the software knows where to place the line. We don’t want to create a “finger pointer.” We want to place a straight line on the wall, so we change the fundamental mechanism to suit our Perceptual approach, and then it works just fine.

    7. Other types of 3D Editing


    The same principles can be applied to the creation of structures, characters, creatures, inanimate objects, and pretty much anything else you can imagine for your 3D scene. A combination of context, pointing, and voice control can achieve an incredible range of creative outcomes.

    Characters - Just as you design your avatars on popular gaming consoles or in your favourite RPG, why not have the camera scan you to get a starting point for creating the character? Hair color, head size, facial features, and skin color can all be read instantly and converted into attributes in the character creation process. Quickly identifying which part of the body you want to work on and then rolling through a selection would be more like shopping than creating, and much more enjoyable.

    Story Animation – Instead of hiring an expensive motion capture firm, why not record your own voice-over scripts in front of the Perceptual Camera? It would not only record your voice, but also track your upper-body skeleton and imprint those motions onto the character to which the speech is applied. Your characters will now sound and animate as convincingly as those in the very best AAA productions!

    Structures – Combining a relatively small number of attributes can produce millions of building designs, all in a few seconds. Consider the buildings created from these two series of commands: “Five storeys. Set To Brick. Five Windows. [point] Add Door. [point] Remove Window” and “One storey. [point] Add Window. Go To Back. Add Three Doors. Set To Wood.” Naturally, the tool would have to construct the geometry and make smart decisions about how the elements interconnect, but the set of element types is finite.

    8. Tricks and Tips


    Do’s

    • Make it a habit to continually compare your Perceptual solution with the traditional methods. If it’s more difficult, or less satisfying, should it really be used?
    • Try your new interface models on new users periodically. If your goal is a more accessible editing system, you should be looking for users without traditional creativity skills.
    • Remember that when using voice recognition, individual accents and native languages will play a huge role in how your final software is received. Traditional software development will not have prepared you for the level of testing required in this area.
    • Experiment with additional technologies that complement the notion of hands-free intelligent interfaces. Look at virtual reality, augmented reality, and additional sensors.

    Don’ts

    • Do not create interfaces that require the user to hold their arm forward for prolonged periods of time. It’s uncomfortable for the user, and very fatiguing long-term.
    • Do not eliminate the keyboard, mouse, or controller from your consideration when developing new Perceptual solutions. You might find mouse- and voice-control is right for your project, or keyboard and “context pointing” in another.
    • Do not assume project lengths can be determined when delving into this type of experimental development. You will be working with early technology in brand new territories so your deliverables should not be set in stone.

    9. Final Thoughts


    As technology enthusiasts, we wait for the days of Holodecks and of chatting with our computers as members of the family. It may surprise you to learn that we’re already on that road and that these magical destinations are a lot closer than you might think. Voice recognition is now usable for everyday applications, the computer can detect what we are looking at to gain context, and we have the processing power to produce software systems expert enough to fill in the gaps when required.

    All we need is a few brave developers stubborn enough to reject yesterday’s solutions and become pioneers in search of new ones. Hopefully, I have painted an attractive picture of how creativity can be expressed without the need for traditional hardware, and highlighted the fact that the technology exists right now. This is not just a better tool or quicker process, but a wholesale transformation of how creativity can be expressed on the computer. By removing all barriers to entry, and eliminating the need for technical proficiencies, Perceptual Computing has the power to democratise creativity on a scale never before seen in our industry.

    About The Author


    When not writing articles, Lee Bamber is the CEO of The Game Creators (http://www.thegamecreators.com), a British company that specializes in the development and distribution of game creation tools. Established in 1999, the company and surrounding community of game makers are responsible for many popular brands including Dark Basic, FPS Creator, and most recently App Game Kit (AGK).

    The application that inspired this article and the blog that tracked its seven week development can be found here: http://ultimatecoderchallenge.blogspot.co.uk/2013/02/lee-going-perceptual-part-one.html

    Lee also chronicles his daily life as a coder, complete with screen shots and the occasional video here: http://fpscreloaded.blogspot.co.uk

    Intel and the Intel logo are trademarks of Intel Corporation in the U.S. and/or other countries.
    Copyright © 2013 Intel Corporation. All rights reserved.
    *Other names and brands may be claimed as the property of others.

  • ULTRABOOK™
  • applications
  • Perceptual Computing
  • 3D editing
  • Developers
  • Windows*
  • Intermediate
  • Intel® Perceptual Computing SDK
  • Perceptual Computing
  • Sensors
  • User Experience and Design
  • Laptop
  • Tablet
  • Desktop
  • URL
  • Developing Sensor Applications on Intel® Atom™ Processor-Based Android* Phones and Tablets


    Download Article

    Developing Sensor Applications on Intel® Atom™ Processor-Based Android* Phones and Tablets [PDF 607KB]



    This guide provides application developers with an introduction to the Android Sensor framework and discusses how to use some of the sensors that are generally available on phones and tablets based on the Intel® Atom™ processor. Among those discussed are the motion, position, and environment sensors. Even though GPS is not strictly categorized as a sensor in the Android framework, this guide discusses GPS-based location services as well. The discussion in this guide is based on Android 4.2, Jelly Bean.

    Sensors on Intel® Atom™ Processor-Based Android Phones and Tablets


    The Android phones and tablets based on Intel Atom processors can support a wide range of hardware sensors. These sensors are used to detect motion and position changes, and report the ambient environment parameters. The block diagram in Figure 1 shows a possible sensor configuration on a typical Intel Atom processor-based Android device.


    Figure 1. Sensors on an Intel® Atom™–based Android system

    Based on the data they report, we can categorize Android sensors into the classes and types shown in Table 1.

    Class               | Sensor (Type)                               | Description                                            | Common Uses
    Motion Sensors      | Accelerometer (TYPE_ACCELEROMETER)          | Measures a device’s accelerations in m/s²              | Motion detection
    Motion Sensors      | Gyroscope (TYPE_GYROSCOPE)                  | Measures a device’s rates of rotation                  | Rotation detection
    Position Sensors    | Magnetometer (TYPE_MAGNETIC_FIELD)          | Measures the Earth’s geomagnetic field strengths in µT | Compass
    Position Sensors    | Proximity (TYPE_PROXIMITY)                  | Measures the proximity of an object in cm              | Nearby object detection
    Position Sensors    | GPS (not a type of android.hardware.Sensor) | Gets accurate geo-locations of the device              | Accurate geo-location detection
    Environment Sensors | ALS (TYPE_LIGHT)                            | Measures the ambient light level in lx                 | Automatic screen brightness control
    Environment Sensors | Barometer                                   | Measures the ambient air pressure in mbar              | Altitude detection

    Table 1. Sensor Types Supported by the Android Platform

    Android Sensor Framework


    The Android sensor framework provides mechanisms to access the sensors and sensor data, with the exception of the GPS, which is accessed through the Android location services. We will discuss this later in this paper. The sensor framework is part of the android.hardware package. Table 2 lists the main classes and interfaces of the sensor framework.

    Name                | Type      | Description
    SensorManager       | Class     | Used to create an instance of the sensor service. Provides various methods for accessing sensors, registering and unregistering sensor event listeners, and so on.
    Sensor              | Class     | Used to create an instance of a specific sensor.
    SensorEvent         | Class     | Used by the system to publish sensor data. It includes the raw sensor data values, the sensor type, the data accuracy, and a timestamp.
    SensorEventListener | Interface | Provides callback methods to receive notifications from the SensorManager when the sensor data or the sensor accuracy has changed.

    Table 2. The Android Platform Sensor Framework

    Obtaining Sensor Configuration

    Device manufacturers decide what sensors are available on the device. You must discover which sensors are available at runtime by invoking the sensor framework’s SensorManager getSensorList() method with a parameter “Sensor.TYPE_ALL”. Code Example 1 displays a list of available sensors and the vendor, power, and accuracy information of each sensor.

    package com.intel.deviceinfo;
    	
    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;
    
    import android.app.Fragment;
    import android.content.Context;
    import android.hardware.Sensor;
    import android.hardware.SensorManager;
    import android.os.Bundle;
    import android.view.LayoutInflater;
    import android.view.View;
    import android.view.ViewGroup;
    import android.widget.AdapterView;
    import android.widget.AdapterView.OnItemClickListener;
    import android.widget.ListView;
    import android.widget.SimpleAdapter;
    	
    public class SensorInfoFragment extends Fragment {
    	
        private View mContentView;
    	
        private ListView mSensorInfoList;	
        SimpleAdapter mSensorInfoListAdapter;
    	
        private List<Sensor> mSensorList;
    
        private SensorManager mSensorManager;
    	
        @Override
        public void onActivityCreated(Bundle savedInstanceState) {
            super.onActivityCreated(savedInstanceState);
        }
    	
        @Override
        public void onPause() 
        { 
            super.onPause();
        }
    	
        @Override
        public void onResume() 
        {
            super.onResume();
        }
    	
        @Override
        public View onCreateView(LayoutInflater inflater, ViewGroup container,
                Bundle savedInstanceState) {
            mContentView = inflater.inflate(R.layout.content_sensorinfo_main, null);
            mContentView.setDrawingCacheEnabled(false);
    	
            mSensorManager = (SensorManager)getActivity().getSystemService(Context.SENSOR_SERVICE);
    	
            mSensorInfoList = (ListView)mContentView.findViewById(R.id.listSensorInfo);
    		
            mSensorInfoList.setOnItemClickListener( new OnItemClickListener() {
    			
                @Override
                public void onItemClick(AdapterView<?> arg0, View view, int index, long arg3) {
    				
                    // with the index, figure out what sensor was pressed
                    Sensor sensor = mSensorList.get(index);
    				
                    // pass the sensor to the dialog.
                    SensorDialog dialog = new SensorDialog(getActivity(), sensor);
    
                    dialog.setContentView(R.layout.sensor_display);
                    dialog.setTitle("Sensor Data");
                    dialog.show();
                }
            });
    		
            return mContentView;
        }
    	
        void updateContent(int category, int position) {
            mSensorInfoListAdapter = new SimpleAdapter(getActivity(), 
    	    getData() , android.R.layout.simple_list_item_2,
    	    new String[] {
    	        "NAME",
    	        "VALUE"
    	    },
    	    new int[] { android.R.id.text1, android.R.id.text2 });
    	mSensorInfoList.setAdapter(mSensorInfoListAdapter);
        }
    	
    	
        protected void addItem(List<Map<String, String>> data, String name, String value)   {
            Map<String, String> temp = new HashMap<String, String>();
            temp.put("NAME", name);
            temp.put("VALUE", value);
            data.add(temp);
        }
    	
    	
        private List<? extends Map<String, ?>> getData() {
            List<Map<String, String>> myData = new ArrayList<Map<String, String>>();
            mSensorList = mSensorManager.getSensorList(Sensor.TYPE_ALL);
    		
            for (Sensor sensor : mSensorList ) {
                addItem(myData, sensor.getName(),  "Vendor: " + sensor.getVendor() + ", min. delay: " + sensor.getMinDelay() +", power while in use: " + sensor.getPower() + "mA, maximum range: " + sensor.getMaximumRange() + ", resolution: " + sensor.getResolution());
            }
            return myData;
        }
    }

    Code Example 1. A Fragment that Displays the List of Sensors**

    Sensor Coordinate System

    The sensor framework reports sensor data using a standard 3-axis coordinate system, where X, Y, and Z are represented by values[0], values[1], and values[2] in the SensorEvent object, respectively.

    Some sensors, such as the light, temperature, proximity, and pressure sensors, return only a single value. For these sensors, only values[0] in the SensorEvent object is used.

    Other sensors report data in the standard 3-axis sensor coordinate system. The following is a list of such sensors:

    • Accelerometer
    • Gravity sensor
    • Gyroscope
    • Geomagnetic field sensor

    The 3-axis sensor coordinate system is defined relative to the screen of the device in its natural (default) orientation. For a phone, the default orientation is portrait; for a tablet, the natural orientation is landscape. When a device is held in its natural orientation, the x axis is horizontal and points to the right, the y axis is vertical and points up, and the z axis points outside of the screen (front) face. Figure 2 shows the sensor coordinate system for a phone, and Figure 3 for a tablet.


    Figure 2. The sensor coordinate system for a phone


    Figure 3. The sensor coordinate system for a tablet

    The most important point regarding the sensor coordinate system is that the sensor’s coordinate system never changes when the device moves or changes its orientation.

    Monitoring Sensor Events

    The sensor framework reports sensor data with the SensorEvent objects. A class can monitor a specific sensor’s data by implementing the SensorEventListener interface and registering with the SensorManager for the specific sensor. The sensor framework informs the class about the changes in the sensor states through the following two SensorEventListener callback methods implemented by the class:

    onAccuracyChanged()

    and

    onSensorChanged()

    Code Example 2 implements the SensorDialog used in the SensorInfoFragment example we discussed in the section “Obtaining Sensor Configuration.”

    package com.intel.deviceinfo;
    
    import android.app.Dialog;
    import android.content.Context;
    import android.hardware.Sensor;
    import android.hardware.SensorEvent;
    import android.hardware.SensorEventListener;
    import android.hardware.SensorManager;
    import android.os.Bundle;
    import android.widget.TextView;
    
    public class SensorDialog extends Dialog implements SensorEventListener {
        Sensor mSensor;
        TextView mDataTxt;
        private SensorManager mSensorManager;
    
        public SensorDialog(Context ctx, Sensor sensor) {
            this(ctx);
            mSensor = sensor;
        }
    	
        @Override
        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            mDataTxt = (TextView) findViewById(R.id.sensorDataTxt);
            mDataTxt.setText("...");
            setTitle(mSensor.getName());
        }
    	
        @Override
        protected void onStart() {
            super.onStart();
            mSensorManager.registerListener(this, mSensor,  SensorManager.SENSOR_DELAY_FASTEST);
        }
    		
        @Override
        protected void onStop() {
            super.onStop();
            mSensorManager.unregisterListener(this, mSensor);
        }
    
        @Override
        public void onAccuracyChanged(Sensor sensor, int accuracy) {
        }
    
        @Override
        public void onSensorChanged(SensorEvent event) {
            if (event.sensor.getType() != mSensor.getType()) {
                return;
            }
            StringBuilder dataStrBuilder = new StringBuilder();
            if ((event.sensor.getType() == Sensor.TYPE_LIGHT)||
                (event.sensor.getType() == Sensor.TYPE_TEMPERATURE)||
                (event.sensor.getType() == Sensor.TYPE_PRESSURE)) {
            dataStrBuilder.append(String.format("Data: %.3f\n", event.values[0]));
            }
            else{         
                dataStrBuilder.append( 
                    String.format("Data: %.3f, %.3f, %.3fn", 
                    event.values[0], event.values[1], event.values[2] ));
            }
            mDataTxt.setText(dataStrBuilder.toString());
        }
    }

    Code Example 2. A Dialog that Shows the Sensor Values**

    Motion Sensors

    Motion sensors are used to monitor device movement, such as shake, rotate, swing, or tilt. The accelerometer and gyroscope are two motion sensors available on many tablet and phone devices.

    Motion sensors report data using the sensor coordinate system, where the three values in the SensorEvent object, values[0], values[1], and values[2], represent the x-, y-, and z-axis values, respectively.

    To understand the motion sensors and apply the data in an application, we need to apply some physics formulas related to force, mass, acceleration, Newton’s laws of motion, and the relationship between several of these entities in time. To learn more about these formulas and relationships, refer to your favorite physics textbooks or public domain sources.

    Accelerometer

    The accelerometer measures the acceleration applied on the device, and its properties are summarized in Table 3.

    Sensor        | Type               | SensorEvent Data (m/s²) | Description
    Accelerometer | TYPE_ACCELEROMETER | values[0]               | Acceleration along the x axis
                  |                    | values[1]               | Acceleration along the y axis
                  |                    | values[2]               | Acceleration along the z axis

    Table 3. The Accelerometer

    The concept for the accelerometer is derived from Newton’s second law of motion:

    a = F/m

    The acceleration of an object is the result of the net external force applied to the object. The external forces include one that applies to all objects on Earth, gravity. It is proportional to the net force F applied to the object and inversely proportional to the object’s mass m.

    In our code, instead of using the above equation directly, we are more concerned with the effect of the acceleration over a period of time on the device’s speed and position. The following equation describes the relationship between an object’s velocity v1, its original velocity v0, the acceleration a, and the time t:

    v1 = v0 + at

    To calculate the object’s position displacement s, we use the following equation:

    s = v0t + (1/2)at²

    In many cases we start with the condition v0 equal to 0 (before the device starts moving), which simplifies the equation to:

    s = at²/2

    Because of gravity, the gravitational acceleration, represented by the symbol g, applies to all objects on Earth. Regardless of the object’s mass, g depends only on the latitude of the object’s location, with a value in the range of 9.78 to 9.82 m/s². We adopt the conventional standard value for g:

    g = 9.80665 m/s²

    Because the accelerometer returns the values using a multidimensional device coordinate system, in our code we can calculate the distances along the x, y, and z axes using the following equations:

    Sx = AxT²/2
    Sy = AyT²/2
    Sz = AzT²/2

    Where Sx, Sy, and Sz are the displacements on the x axis, y axis, and z axis, respectively, and Ax, Ay, and Az are the accelerations on the x axis, y axis, and z axis, respectively. T is the time of the measurement period.
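
    As a rough illustration of these equations (not part of the original article; the class name is hypothetical), the sketch below computes the per-axis displacements for one measurement period. It assumes, as the simplified equation above does, that the velocity is zero at the start of the period, and it ignores the gravity compensation and filtering a real application would need.

    public class DisplacementCalculator {

        // Returns the displacements {Sx, Sy, Sz} in meters for the accelerations
        // {ax, ay, az} in m/s² measured over a period of t seconds, using S = A*t²/2.
        public static float[] displacement(float ax, float ay, float az, float t) {
            float half = 0.5f * t * t;
            return new float[] { ax * half, ay * half, az * half };
        }

        public static void main(String[] args) {
            // Example: accelerations sampled over a 20 ms (0.02 s) interval
            float[] s = displacement(1.2f, 0.0f, 9.81f, 0.02f);
            System.out.printf("Sx=%.6f m, Sy=%.6f m, Sz=%.6f m%n", s[0], s[1], s[2]);
        }
    }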

    public class SensorDialog extends Dialog implements SensorEventListener {
        …	
        private Sensor mSensor;
        private SensorManager mSensorManager;
    	
        public SensorDialog(Context context) {
            super(context);
            mSensorManager = (SensorManager)context.getSystemService(Context.SENSOR_SERVICE);
            mSensor = mSensorManager.getDefaultSensor(Sensor.TYPE_ACCELEROMETER);
        …
    }

    Code Example 3. Instantiation of an Accelerometer**

    Sometimes we don’t use all three dimensions of the data. Other times we may also need to take the device’s orientation into consideration. For example, in a maze application, we use only the x-axis and y-axis gravitational acceleration to calculate the ball’s moving directions and distances, based on the orientation of the device. The following code fragment (Code Example 4) outlines the logic.

    @Override
    public void onSensorChanged(SensorEvent event) {
        if (event.sensor.getType() != Sensor.TYPE_ACCELEROMETER) {
            return;
        } 
    float accelX, accelY;
    …
    //detect the current rotation currentRotation from its “natural orientation”
    //using the WindowManager
        switch (currentRotation) {
            case Surface.ROTATION_0:
                accelX = event.values[0];
                accelY = event.values[1];
                break;
            case Surface.ROTATION_90:
                accelX = -event.values[0];
                accelY = event.values[1];
                break;
            case Surface.ROTATION_180:
                accelX = -event.values[0];
                accelY = -event.values[1];
                break;
            case Surface.ROTATION_270:
                accelX = event.values[0];
                accelY = -event.values[1];
                break;
        }
        //calculate the ball’s moving distances along x, and y using accelX, accelY and the time delta
            …
        }
    }

    Code Example 4. Considering the Device Orientation When Using the Accelerometer Data in a Maze Game**

    Gyroscope


    The gyroscope (or simply gyro) measures the device’s rate of rotation around the x, y, and z axes, as shown in Table 4. The gyroscope data values can be positive or negative: looking at the origin from a position along the positive half of an axis, a counterclockwise rotation around that axis produces a positive value, and a clockwise rotation produces a negative value. We can also determine the sign of a gyroscope value using the “right-hand rule,” illustrated in Figure 4.


    Figure 4. Using the “right-hand rule” to decide the positive rotation direction

    Sensor    | Type           | SensorEvent Data (rad/s) | Description
    Gyroscope | TYPE_GYROSCOPE | values[0]                | Rotation rate around the x axis
              |                | values[1]                | Rotation rate around the y axis
              |                | values[2]                | Rotation rate around the z axis

    Table 4. The Gyroscope

    Code Example 5 shows how to instantiate a gyroscope.

    public class SensorDialog extends Dialog implements SensorEventListener {
        …	
        private Sensor mGyro;
        private SensorManager mSensorManager;
    	
        public SensorDialog(Context context) {
            super(context);
            mSensorManager = (SensorManager)context.getSystemService(Context.SENSOR_SERVICE);
            mGyro = mSensorManager.getDefaultSensor(Sensor.TYPE_GYROSCOPE);
        …
    }

    Code Example 5. Instantiation of a Gyroscope**
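
    Because the gyroscope reports angular velocity in rad/s, a rotation angle can be approximated by integrating the reported rates over the time between events. The following sketch (not part of the original article; the class name is hypothetical) accumulates the rotation around the z axis; integration drift builds up over time, which is why gyroscope data is often combined with other sensors.

    import android.hardware.Sensor;
    import android.hardware.SensorEvent;
    import android.hardware.SensorEventListener;

    public class GyroAngleListener implements SensorEventListener {
        private static final float NS2S = 1.0f / 1000000000.0f; // nanoseconds to seconds
        private long mLastTimestamp = 0;
        private float mAngleZ = 0;  // accumulated rotation around the z axis, in radians

        @Override
        public void onSensorChanged(SensorEvent event) {
            if (event.sensor.getType() != Sensor.TYPE_GYROSCOPE) {
                return;
            }
            if (mLastTimestamp != 0) {
                float dt = (event.timestamp - mLastTimestamp) * NS2S;
                mAngleZ += event.values[2] * dt; // rate around z multiplied by elapsed time
            }
            mLastTimestamp = event.timestamp;
        }

        @Override
        public void onAccuracyChanged(Sensor sensor, int accuracy) {
        }

        public float getAngleZ() {
            return mAngleZ;
        }
    }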

    Position Sensors

    Many Android tablets support two position sensors: the magnetometer and the proximity sensor. The magnetometer measures the strength of the Earth’s magnetic field along the x, y, and z axes, while the proximity sensor detects the distance of the device from another object.

    Magnetometer

    The most important usage of the magnetometer (described in Table 5) in Android systems is to implement the compass.

    Sensor       | Type                | SensorEvent Data (µT) | Description
    Magnetometer | TYPE_MAGNETIC_FIELD | values[0]             | Earth magnetic field strength along the x axis
                 |                     | values[1]             | Earth magnetic field strength along the y axis
                 |                     | values[2]             | Earth magnetic field strength along the z axis

    Table 5. The Magnetometer

    Code Example 6 shows how to instantiate a magnetometer.

    public class SensorDialog extends Dialog implements SensorEventListener {
        …	
        private Sensor mMagnetometer;
        private SensorManager mSensorManager;
    	
        public SensorDialog(Context context) {
            super(context);
            mSensorManager = (SensorManager)context.getSystemService(Context.SENSOR_SERVICE);
            mMagnetometer = mSensorManager.getDefaultSensor(Sensor.TYPE_MAGNETIC_FIELD);
        …
    }

    Code Example 6. Instantiation of a Magnetometer**
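
    To implement the compass mentioned above, the magnetometer reading is usually combined with an accelerometer reading. The following sketch (not part of the original article; the class name is hypothetical) uses the SensorManager.getRotationMatrix() and SensorManager.getOrientation() helpers to compute the azimuth in radians from the two most recent readings.

    import android.hardware.Sensor;
    import android.hardware.SensorEvent;
    import android.hardware.SensorEventListener;
    import android.hardware.SensorManager;

    public class CompassListener implements SensorEventListener {
        private float[] mGravity;      // last accelerometer reading
        private float[] mGeomagnetic;  // last magnetometer reading

        @Override
        public void onSensorChanged(SensorEvent event) {
            if (event.sensor.getType() == Sensor.TYPE_ACCELEROMETER) {
                mGravity = event.values.clone();
            } else if (event.sensor.getType() == Sensor.TYPE_MAGNETIC_FIELD) {
                mGeomagnetic = event.values.clone();
            }
            if (mGravity == null || mGeomagnetic == null) {
                return;
            }
            float[] rotation = new float[9];
            float[] inclination = new float[9];
            if (SensorManager.getRotationMatrix(rotation, inclination, mGravity, mGeomagnetic)) {
                float[] orientation = new float[3];
                SensorManager.getOrientation(rotation, orientation);
                float azimuth = orientation[0]; // radians; 0 points toward magnetic north
                // use azimuth to update the compass needle
            }
        }

        @Override
        public void onAccuracyChanged(Sensor sensor, int accuracy) {
        }
    }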

    Proximity

    The proximity sensor provides the distance between the device and another object. The device can use it to detect whether it is being held close to the user (see Table 6), for example to determine that the user is on a phone call and to turn off the display for the duration of the call.

    Table 6: The Proximity Sensor
    Sensor    | Type           | SensorEvent Data (cm) | Description
    Proximity | TYPE_PROXIMITY | values[0]             | Distance from an object in cm. Some proximity sensors only report a Boolean value to indicate if the object is close enough.

    Code Example 7 shows how to instantiate a proximity sensor.

    public class SensorDialog extends Dialog implements SensorEventListener {
        …	
        private Sensor mProximity;
        private SensorManager mSensorManager;
    	
        public SensorDialog(Context context) {
            super(context);
            mSensorManager = (SensorManager)context.getSystemService(Context.SENSOR_SERVICE);
            mProximity = mSensorManager.getDefaultSensor(Sensor.TYPE_PROXIMITY);
        …
    }

    Code Example 7. Instantiation of a Proximity Sensor**
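
    Because many proximity sensors effectively behave as a near/far switch, a common way to interpret the reading is to compare it against the sensor’s maximum range. The fragment below is an illustrative sketch (not part of the original sample), assuming the dialog above is registered as the proximity listener:

        // Sketch: treating the proximity reading as a near/far indicator.
        @Override
        public void onSensorChanged(SensorEvent event) {
            if (event.sensor.getType() != Sensor.TYPE_PROXIMITY) return;
            float distanceCm = event.values[0];
            boolean isNear = distanceCm < event.sensor.getMaximumRange();
            // A phone app could use isNear to blank the screen during a call.
        }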

    Environment Sensors

    The environment sensors detect and report the device’s ambient environment parameters, such as light, temperature, pressure, or humidity. The ambient light sensor (ALS) and the pressure sensor (barometer) are available on many Android tablets.

    Ambient Light Sensor (ALS)

    The ambient light sensor, described in Table 7, is used by the system to detect the illumination of the surrounding environment and automatically adjust the screen brightness accordingly.

    Table 7: The Ambient Light Sensor
    Sensor   Type         SensorEvent Data (lx)    Description
    ALS      TYPE_LIGHT   values[0]                The illumination around the device

    Code Example 8 shows how to instantiate the ALS.

    …	
        private Sensor mALS;
        private SensorManager mSensorManager;
    
        …	
            mSensorManager = (SensorManager)context.getSystemService(Context.SENSOR_SERVICE);
            mALS = mSensorManager.getDefaultSensor(Sensor.TYPE_LIGHT);
        …

    Code Example 8. Instantiation of an Ambient Light Sensor**

    Barometer

    Applications can use the atmospheric pressure sensor (barometer), described in Table 8, to calculate the altitude of the device’s current location.

    Table 8: The Atmospheric Pressure Sensor
    Sensor      Type            SensorEvent Data (mbar)    Description
    Barometer   TYPE_PRESSURE   values[0]                  The ambient air pressure in mbar

    Code Example 9 shows how to instantiate the barometer.

    …	
        private Sensor mBarometer;
        private SensorManager mSensorManager;
    
        …	
            mSensorManager = (SensorManager)context.getSystemService(Context.SENSOR_SERVICE);
            mBarometer = mSensorManager.getDefaultSensor(Sensor.TYPE_PRESSURE);
        …

    Code Example 9. Instantiation of a Barometer**
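
    The framework already provides the pressure-to-altitude conversion: an application can pass the barometer reading to SensorManager.getAltitude() along with a reference sea-level pressure. The fragment below is an illustrative sketch (not part of the original sample), assuming the dialog above is registered as the pressure listener:

        // Sketch: converting the barometer reading (mbar) to an altitude estimate (meters).
        @Override
        public void onSensorChanged(SensorEvent event) {
            if (event.sensor.getType() != Sensor.TYPE_PRESSURE) return;
            float pressureMbar = event.values[0];
            // PRESSURE_STANDARD_ATMOSPHERE (1013.25 mbar) is used as the sea-level reference;
            // pass the current local sea-level pressure instead for better accuracy.
            float altitudeMeters = SensorManager.getAltitude(
                    SensorManager.PRESSURE_STANDARD_ATMOSPHERE, pressureMbar);
        }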

    Sensor Performance and Optimization Guidelines

    To use sensors in your applications, you should follow these best practices:

    • Always check the specific sensor’s availability before using it
      The Android platform does not require device makers to include any particular sensor. Before using a sensor in your application, always check whether it is actually available.
    • Always unregister the sensor listeners
      When the activity that implements the sensor listener becomes invisible, or the dialog stops, unregister the sensor listener. This can be done in the activity’s onPause() method or in the dialog’s onStop() method; otherwise, the sensor continues acquiring data and drains the battery. (Both of these practices are illustrated in the sketch after this list.)
    • Don’t block the onSensorChanged() method
      The onSensorChanged() method is frequently called by the system to report the sensor data. You should put as little logic inside this method as possible. Complicated calculations with the sensor data should be moved outside of this method.
    • Always test your sensor applications on real devices
      All sensors described in this section are hardware sensors. The Android Emulator may not be capable of simulating a particular sensor’s functions and performance.
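
    The fragment below sketches the first two practices applied to the gyroscope dialog from Code Example 5; it is an illustrative addition, not part of the original samples:

        @Override
        protected void onStart() {
            super.onStart();
            // getDefaultSensor() returns null if the device has no gyroscope.
            if (mGyro != null) {
                mSensorManager.registerListener(this, mGyro,
                        SensorManager.SENSOR_DELAY_NORMAL);
            }
        }

        @Override
        protected void onStop() {
            super.onStop();
            // Stop acquiring data so the sensor does not keep draining the battery.
            mSensorManager.unregisterListener(this);
        }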

    GPS and Location


    GPS (Global Positioning System) is a satellite-based system that provides accurate geo-location information around the world. GPS is available on many Android phones and tablets. In many respects, GPS behaves like a position sensor: it can provide accurate location data for applications running on the device. On the Android platform, however, GPS is not managed by the sensor framework. Instead, the Android location service accesses and transfers GPS data to an application through the location listener callbacks.

    This section only discusses the GPS and location services from a hardware sensor point of view. The complete location strategies offered by Android 4.2 and Intel Atom processor-based Android phones and tablets are a much larger topic and are outside the scope of this section.

    Android Location Services

    Using GPS is not the only way to obtain location information on an Android device. The system can also use Wi-Fi*, cellular networks, or other wireless networks to get the device’s current location. GPS and wireless networks (including Wi-Fi and cellular networks) act as “location providers” for Android location services. Table 9 lists the main classes and interfaces used to access Android location services.

    Table 9: The Android Platform Location Service
    Name               Type             Description
    LocationManager    Class            Used to access location services. Provides various methods for requesting periodic location updates for an application, or sending proximity alerts
    LocationProvider   Abstract class   The abstract super class for location providers
    Location           Class            Used by the location providers to encapsulate geographical data
    LocationListener   Interface        Used to receive location notifications from the LocationManager

    Obtaining GPS Location Updates

    Similar to the mechanism of using the sensor framework to access sensor data, the application implements several callback methods defined in the LocationListener interface to receive GPS location updates. The LocationManager sends GPS update notifications to the application through these callbacks (the “Don’t call us, we’ll call you” rule).

    To access GPS location data in the application, you need to request the fine location access permission in your Android manifest file (Code Example 10).

    <manifest …>
    …
        <uses-permission android:name="android.permission.ACCESS_FINE_LOCATION" />
    …  
    </manifest>

    Code Example 10. Requesting the Fine Location Access Permission in the Manifest File**

    Code Example 11 shows how to get GPS updates and display the latitude and longitude coordinates on a dialog text view.

    package com.intel.deviceinfo;
    
    import android.app.Dialog;
    import android.content.Context;
    import android.location.Location;
    import android.location.LocationListener;
    import android.location.LocationManager;
    import android.os.Bundle;
    import android.widget.TextView;
    
    public class GpsDialog extends Dialog implements LocationListener {
        TextView mDataTxt;
        private LocationManager mLocationManager;
    	
        public GpsDialog(Context context) {
            super(context);
            mLocationManager = (LocationManager)context.getSystemService(Context.LOCATION_SERVICE);
        }
    
        @Override
        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            mDataTxt = (TextView) findViewById(R.id.sensorDataTxt);
            mDataTxt.setText("...");
    		
            setTitle("Gps Data");
        }
    	
        @Override
        protected void onStart() {
            super.onStart();
            mLocationManager.requestLocationUpdates(
                LocationManager.GPS_PROVIDER, 0, 0, this);
        }
    		
        @Override
        protected void onStop() {
            super.onStop();
            mLocationManager.removeUpdates(this);
        }
    
        @Override
        public void onStatusChanged(String provider, int status, 
            Bundle extras) {
        }
    
        @Override
        public void onProviderEnabled(String provider) {
        }
    
        @Override
        public void onProviderDisabled(String provider) {
        }
    
        @Override
        public void onLocationChanged(Location location) {
            StringBuilder dataStrBuilder = new StringBuilder();
            dataStrBuilder.append(String.format("Latitude: %.3f, Longitude: %.3f\n", location.getLatitude(), location.getLongitude()));
            mDataTxt.setText(dataStrBuilder.toString());
    		
        }
    }

    Code Example 11. A Dialog that Displays the GPS Location Data**

    GPS and Location Performance and Optimization Guidelines

    GPS provides the most accurate location information on the device. On the other hand, as a hardware feature, it consumes extra energy. It also takes time for the GPS to get the first location fix. Here are some guidelines you should follow when developing GPS and location-aware applications:

    • Consider all available location providers
      In addition to GPS_PROVIDER, there is NETWORK_PROVIDER. If your application only needs coarse location data, consider using NETWORK_PROVIDER.
    • Use the cached locations
      It takes time for the GPS to get the first location fix. While your application is waiting for the GPS to deliver an accurate location update, you can first use the locations returned by the LocationManager’s getLastKnownLocation() method to perform part of the work (see the sketch after this list).
    • Minimize the frequency and duration of location update requests
      Request location updates only when needed, and promptly unregister from the location manager once you no longer need them.
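
    The fragment below sketches the cached-location practice, reusing the mLocationManager from Code Example 11; the fallback to the network provider is an illustrative assumption, not part of the original example:

        // Sketch: use the last known location while waiting for a fresh GPS fix.
        Location lastKnown = mLocationManager.getLastKnownLocation(LocationManager.GPS_PROVIDER);
        if (lastKnown == null) {
            // Fall back to the coarser network provider if GPS has no cached fix.
            lastKnown = mLocationManager.getLastKnownLocation(LocationManager.NETWORK_PROVIDER);
        }
        if (lastKnown != null) {
            // Use the cached coordinates to update the UI immediately.
            double lat = lastKnown.getLatitude();
            double lon = lastKnown.getLongitude();
        }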

    Summary


    The Android platform provides APIs for developers to access a device’s built-in sensors. These sensors are capable of providing raw data about the device’s current motion, position, and ambient environment conditions with high precision and accuracy. In developing sensor applications, you should follow the best practices to improve the performance and power efficiency.

    About the Author

    Miao Wei is a software engineer in the Intel Software and Services Group. He is currently working on Intel® Atom™ processor scale-enabling projects.




    Copyright © 2013 Intel Corporation. All rights reserved.
    *Other names and brands may be claimed as the property of others.

    **This sample source code is released under the Intel Sample Source Code License Agreement


    Building Compelling Windows* Store Apps for Intel® Atom™ Processor-Based Tablet Devices

    $
    0
    0

    Pub Date : Nov 26 (to be removed)

    Downloads

    Building Compelling Windows* Store Apps for Intel® Atom™ Processor-Based Tablet Devices [PDF 403KB]

    Abstract

    This article looks at the various platform features available on the latest Intel® Atom™ processor Z3000 series-based tablets and explores ways to use these features to build compelling Windows* Store apps. It also covers some of the new features and APIs available in Windows 8.1 that help build these apps.

    Overview

    Some of the new and improved features available on Intel Atom processor Z3000 series-based tablets include Camera Image Signal Processing (ISP) 2.0, NFC, Sensors, GPS, Intel® HD Graphics, Media capabilities, Intel® Wireless Display, and more. This article will explore some of these platform features, provide a brief overview of the capabilities, and discuss how you can implement these features in a Windows Store app.

    Camera Image Signal Processing 2.0

    The Intel Atom processor Z3000 series-based tablet platforms have a new and improved ISP for taking great action images and videos. Improvements include digital video stabilization, burst capture mode, zero shutter lag capture, and multi-camera image capture.

    Capturing Audio and Video

    Windows 8.1 includes a simple, easy-to-incorporate API for capturing videos and photos. The CameraCaptureUI class gives the end user a full camera experience, complete with customization of options and a crop interface after an image is captured. See the screenshot in Figure 1 for more details.


    Figure 1. CameraCaptureUI screenshot in Windows* 8.1

    The code to enable this feature is as simple as the following:

    CameraCaptureUI dialog = new CameraCaptureUI();
    StorageFile file = await dialog.CaptureFileAsync(CameraCaptureUIMode.Photo);

    Digital Video Stabilization

    New for Windows 8.1 and tablets with Intel Atom processor Z3000 series is the ability to enable video stabilization. This feature uses dedicated hardware to stabilize jitter or shakiness caused by hand motion while recording. To enable this feature, create a new MediaCapture object and add an effect as demonstrated in the code below:

    mediaCapture = new Windows.Media.Capture.MediaCapture();
    await mediaCapture.InitializeAsync(settings);
    await mediaCapture.AddEffectAsync(Windows.Media.Capture.MediaStreamType.VideoRecord, "Windows.Media.VideoEffects.VideoStabilization", null);

    Dual Cameras

    Another new feature is the ability to simultaneously capture video or images from both the front and rear facing cameras. In the screenshot below, the front facing camera captures a headshot and the rear facing camera captures a landscape.

    The ability to snap pictures or record video from both cameras simultaneously creates some interesting use cases. A tablet with two cameras could capture video of multiple meeting participants at the same time for an online meeting. This could also be useful for adding narration while recording.

    Sensors

    Sensors available on Intel Atom processor-based tablets include GPS, NFC, accelerometer, gyrometer, magnetometer, and others. The article “Using Sensors and Location Data for Cutting-edge User Experiences in Mobile Applications” (http://software.intel.com/en-us/articles/using-sensors-and-location-data-for-cutting-edge-user-experiences-in-mobile-applications) takes an in-depth look at the APIs available in Windows Store apps and ways to improve on sensor experiences.

    The rest of this article expands on the new geofencing support and looks at use cases around NFC.

    Geofencing

    The first question to look at is: what is geofencing? The following image depicts a virtual perimeter around a real-world geographic location:

    The red marker is the real world point of interest, and the green dot is the device reporting a GPS position. The blue perimeter is a virtual fence around this point of interest.

    In this image, as the device with GPS moves location and leaves the virtual fence, the application is notified and an action can be taken. This is also true for the reverse. If the device enters the virtual fence, the same application notification occurs.

    Geofencing itself is not new. What is new are the classes and modules available in Windows 8.1 for a great developer (and user) experience creating (and using) geofencing apps. With geofencing APIs come some new and interesting use cases for apps. Examples include location-based reminders, such as receiving a reminder to pick up milk on the way home when you leave the geofence around the office. Geofences could be set up to send users alerts for upcoming train stops or notifications for when their children arrive at or depart from school. Apps could use geofences to assist with check-ins and check-outs on social web sites. Geofencing could also power retail applications, where coupons are offered to users who are physically in your store. Additionally, coupons could be offered only when the user has spent a certain amount of time browsing in the store.

    To set up a geofence in code, the following example can be adapted for your purposes:

    // Get a geolocator object and the current position
    geolocator = new Geolocator();
    position = await geolocator.GetGeopositionAsync();
    
    // receive notifications of change events
    GeofenceMonitor.Current.GeofenceStateChanged += OnGeofenceStateChanged;
    
    // the geofence is a circular region with a 50-meter radius,
    // centered on the BasicGeoposition of the current location
    Geocircle geocircle = new Geocircle(position.Coordinate.Point.Position, 50);
    
    TimeSpan dwellTime = new TimeSpan(0, 5, 0); // 5 minutes
    TimeSpan duration = new TimeSpan(2, 0, 0);  // 2 hour duration
    
    geofence = new Geofence(fenceKey, geocircle, mask, singleUse, dwellTime, startTime, duration);
    GeofenceMonitor.Current.Geofences.Add(geofence);

    First, the app needs a position to set up the virtual fence around. This is done by using the Geolocator object. Next, an event handler is configured to be called when the geofence conditions are met. In this example, a geocircle with a 50-meter radius is used as the virtual perimeter. The last step is to create the geofence and add it to the list of all geofences. The two time parameters specify how long the device’s position needs to be inside or outside the geofence before the event is triggered, and how long the geofence should remain active.

    Near Field Communication (NFC)

    NFC is a wireless connection that differs from Bluetooth* or Wi-Fi* in a couple of ways. NFC is a quick, simple, automatic connection that requires little or no configuration by the end user. This is quite different from Bluetooth or Wi-Fi where connections require passwords and other information to establish a connection. In the case of NFC, proximity is used as the connection mechanism, and an NFC connection will only work when the devices are within about 4 cm.

    At a higher level, NFC usage can be divided into three use cases: acquiring information (e.g., read URI from NFC tag), exchanging information (e.g., send/receive photo), and connecting devices (e.g., tap device to configure Bluetooth or other connection configuration). These three categories together can enable a variety of use cases, and we will take a look at a couple of them below.

    For a closer look at NFC and its details, see the article “NFC Usage in Windows* Store Apps – a Healthcare App Case Study” (http://software.intel.com/en-us/articles/nfc-usage-in-windows-store-apps-a-healthcare-app-case-study).

    Protocol Activation

    Using NFC to acquire information means that a device is brought near another device or a passive NFC tag to read its contents. If the user has an application registered to handle the custom data on the NFC tag, the system can launch that app for the user. The process follows three steps:

    1. The app registers the URI scheme (e.g., mailto, http) in the application manifest
    2. The user reads the NFC tag containing the URI
    3. The app is launched and the entire URI is passed in

    With the URI, the application can then pull information from it as follows:

    public partial class App
    {
        protected override void OnActivated(IActivatedEventArgs args)
        {
            if (args.Kind == ActivationKind.Protocol)
            {
                ProtocolActivatedEventArgs protocolArgs = args as ProtocolActivatedEventArgs;
                // URI is protocolArgs.Uri.AbsoluteUri
            }
        }
    }

    Connect Devices

    One of the most compelling use cases is to use NFC for configuring a more complicated connection such as Wi-Fi or Bluetooth. In this method a user could tap their device to NFC-enabled Bluetooth headphones and quickly set up an audio connection for listening to music. The following code is an example of how to set up a socket connection started with an NFC connection.

    // configure the type of socket connection to make
    PeerFinder.AllowInfrastructure = true;
    PeerFinder.AllowBluetooth = true;
    PeerFinder.AllowWiFiDirect = true;
    
    // Hook up an event handler that is called when a connection has been made
    PeerFinder.TriggeredConnectionStateChanged += OnConnectionStateChange;
    
    void OnConnectionStateChange(object sender, TriggeredConnectionStateChangedEventArgs eArgs)
    {
        if (eArgs.State == TriggeredConnectState.Completed)
        {
            // use the socket to send/receive data
            networkingSocket = eArgs.Socket;
            ...
        }
    }

    The first few lines specify what type of new socket-based connection you want to initiate when an NFC connection is made. Next, it’s as simple as adding a callback and waiting for the user to take some action. I have simplified the example somewhat. For more details, see the article “Creating Multi-Player Experiences Using NFC and Wi-Fi* Direct On Intel® Atom™ Processor-Based Tablets” (http://software.intel.com/en-us/articles/creating-multi-player-experiences-using-nfc-and-wi-fi-direct-on-intel-atom-processor-based).

    Intel® Wireless Display (Intel® WiDi)

    Windows 8.1 includes support for Miracast. Intel® Wireless Display (Intel® WiDi) is compatible with the Miracast specification, and Windows 8.1 allows developers to take advantage of Intel WiDi in a Windows Store app. The main way to use Intel WiDi is in a dual-screen scenario. In this scenario, a single Windows Store app makes use of both the tablet device screen and an Intel WiDi-connected screen. The Intel WiDi screen does not have to be just a mirrored display of what is on the tablet. Instead, it can contain completely different content than the tablet display. One common use case for this scenario is to play a movie on an Intel WiDi screen connected to the living room television, and then use the tablet to extend the movie experience by displaying controls for video playback or contextual information about the movie.

    The following is a quick sample of the functions involved:

    if ( ProjectionManager.ProjectionDisplayAvailable )
    {
        // start projecting a view on secondary display
        ProjectionManager.StartProjectingAsync( secondaryViewId, ApplicationView.GetForCurrentView().Id);
        ...
    }

    Using ProjectionManager, you first figure out if a second display is available. If so, then pass in a view ID to start projecting on the second display. To learn more, see the reference material at http://msdn.microsoft.com/en-us/library/windows/apps/windows.ui.viewmanagement.projectionmanager.aspx.

    Summary

    The items discussed in this article are just some of the new features and APIs available on the Intel Atom processor Z3000 series-based tablets running Windows 8.1. Other platform features include Intel® HD Graphics, different sensors, and security features. You can find out about additional Windows 8.1 functionality in the article “Windows 8.1: New APIs and features for developers” (http://msdn.microsoft.com/en-us/library/windows/apps/bg182410.aspx).

    About the Author

    Nathan Totura is an application engineer in the Intel Software and Services Group. Currently working on the Intel® Atom™ processor-enabling team, he helps connect software developers with Intel® technology and resources. Primarily these technologies include tablets and handsets on the Android*, Windows* 8, and iOS* platforms.

    Connect with Nathan on Google+

    Intel, the Intel logo, and Atom are trademarks of Intel Corporation in the US and/or other countries.
    Copyright © 2013 Intel Corporation. All rights reserved.
    *Other names and brands may be claimed as the property of others.
