Channel: Sensors

Introduction to Ultrabooks


With the arrival of Windows 8 (and, consequently, of the Windows Store), we are witnessing a proliferation of increasingly complex and attractive third-party applications, in which PC manufacturers play a fundamental role. Intel, in particular, is making excellent use of the potential of Microsoft's new operating system, offering users a new type of portable computer that has little in common with the classic notebooks we are used to seeing: the much-discussed Ultrabooks.

Light, extremely powerful (equipped with Intel processors), and fitted with various types of sensors, Ultrabooks stand out from competing products and aim to outclass them across the board. To qualify as an Ultrabook, a laptop must meet specific requirements set by Intel, including battery life, thickness, and so on.

Three phases are planned for Ultrabooks, each based on an Intel architecture: Sandy Bridge, Ivy Bridge, and Haswell.

First phase (Q4 2011)

• Thinner: less than 21 mm (0.8 inches) thick
• Lighter: less than 1.4 kg (3.1 lbs)
• Long battery life: 5 to 8 hours or more
• Price: under USD 1,000 (base model)
• No optical drive
• SSD storage
• CULV (17 W TDP) Intel Sandy Bridge processors:
• Core i5-2467M (1.6 GHz)
• Core i5-2557M (1.7 GHz)
• Core i7-2637M (1.7 GHz)
• Core i7-2677M (1.8 GHz)
• Intel HD 3000 graphics subsystem (12 EUs)


Second phase (2012)

• CULV Intel Ivy Bridge processors
• 30% better integrated graphics performance than Sandy Bridge
• 20% better CPU performance than Sandy Bridge
• USB 3.0, PCI Express 3.0


Third phase (2013)

• CULV Intel Haswell mobile processors
• New, more advanced power management: half the power consumption of early-2011 (Sandy Bridge) designs

Thanks to these new devices, tablets and Ultrabooks, developers gain access to a world of new possibilities by exploiting sensors. Accelerometers and gyroscopes, GPS and proximity sensors, ambient light sensors, magnetic compasses, and NFC are just some of the instruments through which Windows Store applications can interact with the end user in innovative ways. All of this is managed by the Windows Sensor Framework, a layer of the operating system that acts as an intermediary between applications and the OS components involved in sensor management. This framework thus lets developers discover and manage the various sensors individually. This approach is without a doubt an advantage, both for developers (who can write appealing, complete, state-of-the-art software and applications) and for end users (who can interact actively with the outside world).
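
As a minimal illustration (this sketch is not from the original article), the desktop-facing side of this sensor stack, the COM-based Sensor API, can be used to discover which sensors a machine exposes; error handling is deliberately simplified:

    #include <windows.h>
    #include <sensorsapi.h>
    #include <sensors.h>
    #include <stdio.h>
    #pragma comment(lib, "sensorsapi.lib")

    int main(void)
    {
        CoInitializeEx(NULL, COINIT_APARTMENTTHREADED);

        ISensorManager* pManager = NULL;
        // The sensor manager is the entry point of the sensor framework.
        HRESULT hr = CoCreateInstance(CLSID_SensorManager, NULL,
                                      CLSCTX_INPROC_SERVER, IID_PPV_ARGS(&pManager));
        if (SUCCEEDED(hr))
        {
            ISensorCollection* pSensors = NULL;
            // Ask the framework for every 3D accelerometer it knows about.
            hr = pManager->GetSensorsByType(SENSOR_TYPE_ACCELEROMETER_3D, &pSensors);
            if (SUCCEEDED(hr))
            {
                ULONG count = 0;
                pSensors->GetCount(&count);
                wprintf(L"Found %lu accelerometer(s)\n", count);
                pSensors->Release();
            }
            pManager->Release();
        }
        CoUninitialize();
        return 0;
    }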

  • Developers
  • Partners
  • Professors
  • Students
  • Microsoft Windows* (XP, Vista, 7)
  • Microsoft Windows* 8
  • HTML5
  • Windows*
  • Beginner
  • Sensors
  • Laptop
  • Tablet
  • URL

  • Sumerics Case Study


    By Geoff Arnold

    Download Article

    Sumerics Case Study [PDF 1.03MB]

    Introduction

    Florian Rappl, a 28-year-old PhD student at the University of Regensburg in Germany, won the App Innovation Contest for creating Sumerics, an app that performs a complex analysis of Ultrabook™ device sensor data and displays the results in visually compelling 2D and 3D graphs. Built using existing frameworks and languages—specifically Windows* Presentation Foundation (WPF) and C#—Sumerics takes full advantage of the sensor and touch screen capabilities of Ultrabook devices running Microsoft Windows 8. While Rappl chose the well-established WPF for rendering the user interface, he relied on the relatively new open-source library MahApps.Metro to give Sumerics' UI its modern, touch-friendly look and feel.

    Sumerics is available on the Intel AppUp® center.


     Figure 1. A simple 3D plot rendered by Sumerics.

    About the Sumerics App

    Rappl designed Sumerics with two goals in mind: First, he wanted to present an intuitive touch-enabled user interface for visualizing complex data series gathered either from various static files, such as CSV, or from real-time dynamic data from Ultrabook device sensors. Second, he wanted to make it easy for users to directly access these data series as inputs in computational analysis. To enable this analysis, Rappl exposed all sensors as functions for displaying and analyzing data, a design decision that allows users to gather information about the current states of the Ultrabook device sensors for further exploration. Rappl said anyone who deals with data visualization might benefit from the app. For example, Sumerics could be used in an advanced course on acoustical physics to analyze words spoken into the Ultrabook device microphone through the use of Sumerics' fast Fourier transform algorithm, enabling users to obtain a visual representation of the frequencies for certain words by specific speakers.


     Figure 2. The Sumerics "sensors" tab, where instant plots for most sensors can be created. By default this tab is activated and gives instant information about sensors, without the need for users to type a command.

    Using the Ultrabook device keyboard, attached mouse, or other peripheral, a Sumerics user can input and edit data, similar to using MATLAB*, a high-level language and interactive environment for numerical computation, visualization, and programming. However, because everything is designed with touch in mind, the graphs that Sumerics displays can be rotated, translated, scaled, saved, and printed using on-screen touch commands. In this way, Sumerics takes advantage of the power and flexibility of Ultrabook devices, which Rappl said provides the killer combination of a tablet's portability and a laptop's high performance and ability to run standard software.

    Challenges Faced in Creating Sumerics

    During the process of creating Sumerics, Rappl faced four primary challenges.

    • Deciding whether to develop a Windows Store app or an app to run in Windows 8 Desktop
    • Deciding whether to build the app using HTML5 or WPF
    • Properly planning for touch and sensor implementation
    • Building a math parser and deciding which plotting libraries to use

    The following sections describe how Rappl addressed each of these challenges.

    Navigating Windows 8

Windows 8 presents a choice to end users. They can navigate the Ultrabook device in Desktop mode, which is similar to navigating PCs and laptops running earlier versions of Windows. Or they can switch to the modern, touch-oriented UI of flat, laterally scrolling tiles that is already familiar to smartphone and tablet users. Consequently, developers must decide how end users will eventually access their apps.

    Rappl chose to build Sumerics for the Windows 8 desktop, primarily because he already had experience building Windows desktop apps, particularly using C# with the WPF UI framework, which has been available for several years. Accordingly, many open-source libraries are available, including ones that make it easy to build touch-enabled UIs.

One disadvantage Rappl found is that, as is, an app coded in C# with WPF for the desktop UI cannot be submitted to the Windows Store. Apps in the Windows Store are generally coded in JavaScript* with UIs built around HTML5 and CSS3, and Rappl said he would have used this approach if he had wanted to make sure Sumerics was distributed on the Windows Store. (For now it's available on the Intel AppUp® center.) Rappl noted that it is possible to use the WinRT component library to write a JavaScript wrapper for a C#-based app, which can then be distributed through the Windows Store. However, at the time Sumerics was created, Rappl said that bugs were associated with this approach, and indeed with all Windows Store apps built with C#/VB/C++ and XAML, in part because of difficulties in taking advantage of the WPF binding capabilities.

    Using existing tools and technologies, Rappl built a touch-enabled app that takes full advantage of Ultrabook device sensors and runs smoothly on Ultrabook devices running Windows 8, providing user access and navigation for Sumerics in the Desktop mode. Given the long and established legacy of the Windows desktop, Sumerics has a channel to reach a potentially large audience.

    HTML5 versus WPF

For Rappl, a key part of navigating Windows 8 was deciding whether to build the app using HTML5 or WPF. He concluded that HTML5 undeniably has a bright future and expects it to eventually become the de facto standard for building platform-independent apps. However, since he was already well versed in WPF, he built the UI around that graphical subsystem, which can be used to build compelling Windows 8 desktop apps that are touch- and sensor-enabled. Rappl said WPF introduces device-independent pixels, which offer a significant advantage over the much older alternative, Windows Forms, in use since the early days of the Microsoft .NET Framework. He also lauded the data-binding capabilities of WPF, which make full use of the concepts of attached properties and dependency properties.


 Figure 3. The subplot feature, which allows the user to view multiple plots from the same data series simultaneously.

    However, WPF has its own challenges. For example, like Windows Forms, the controls can be heavy. But because everything (other than images) is a vector and can run over DirectX*, the UI can be accelerated, thus minimizing this problem. The controls can also be completely customized, and developers can make full use of the ability to differentiate between the logical and visual tree. Plus, the 3D capabilities in WPF provide a modest productivity boost.

As for HTML5 (used in combination with CSS3 and JavaScript), Rappl thinks the markup language will in all likelihood be central for apps headed for a cross-platform touch screen future, which arguably has already arrived. But issues remain. For example, it's frustrating, he said, that while HTML5 and C# are both cross-platform, the combination of the two is not. Attempting to work around this constraint using C#-to-JavaScript source-to-source compilers can be confusing. And any move to JavaScript, a dynamic language, means sacrificing the speed and responsiveness associated with C#, a static language. So for Rappl, who wanted a maximally responsive app for his sophisticated users, C# with WPF was the way to go.

    Properly planning for touch and sensor implementation

    Sumerics was the first app Rappl developed for touch screens. Rappl's three main takeaways: First, for ease of use, make each button at least 40x40 pixels—large enough to be clearly visible and easy to push or tap. Second, use tooltips (the popup text enabled by placing the mouse cursor over the element) as a complementary feature; always work to ensure that using the program is obvious without having to read the text. Third, ensure the main features of the app are accessible even without a keyboard—a simple enough rule to adhere to when designing for tablets and smartphones, but more difficult to follow when building a touch-aware app, specifically one that will be accessed on the Windows 8 desktop on an Ultrabook device.

Rappl said dealing with sensors turned out to be a secondary part of his overall development work. This was because the main focus of Sumerics is visualization of data series that can come from any source, be it an imported CSV file or an onboard accelerometer. Indeed, Sumerics works well even if it's used just to visualize data series imported from static files. Also, because it was relatively straightforward to expose sensor data as functions using the Windows Sensor and Location Platform, Rappl was able to spend most of his time working on how to best render compelling graphs and plots.


Figure 4. The Sumerics "interaction" tab, where users can type commands, get help, or view variables currently available in the workspace (shown on the right). These variables are used to store values from previous calculations. The tab presents an overview that gives the user information about not only which variables are available, but also what kind of value is stored in them, such as a scalar, matrix, or plot. A graphical representation of the value is given when the user touches the variable.

    Building a math parser and deciding which plotting libraries to use

    At the heart of Sumerics is Yet Another Math Parser (YAMP*), which takes a string of characters and transforms them into mathematical expressions. When Rappl saw the Windows* 8 & Ultrabook™ App Innovation Contest advertised on CodeProject, he decided to simultaneously build both the Sumerics app and a math parser, one that he hoped could take advantage of the features of C#. By releasing the YAMP code to the open-source community, Rappl tapped their expertise during the code review process central to the open-source world.

    Here's a high-level look at the Sumerics architecture, where green represents third-party libraries, orange is the YAMP external library, and blue represents the new libraries associated with Sumerics:

    Sumerics obviously relies heavily on YAMP, which requires a certain format for commands. For example, if a YAMP user wants to clear certain variables, he or she must enter the following command:

    clear("a", "b", "c")
    

    With Sumerics the command is simpler:

    clear a b c
    

    This works because Sumerics has a command parser as well. Rappl added the parser to make it as easy as possible for end users to call on YAMP functions. If the command parser finds a valid command, Sumerics executes it. Otherwise the whole expression is passed to YAMP, which then parses and interprets the expression.

    Rappl says writing the command parser was easy and required only an abstract base class. Below, a code snippet shows how he employed reflection to register the methods that the Sumerics command parser can use.

    using System;
    using System.Collections.Generic;
    using System.Reflection;

    // Assumed declaration (the original snippet references this registry
    // without showing it): commands, keyed by lower-case command name.
    static Dictionary<string, YCommand> commands =
        new Dictionary<string, YCommand>();

    public static void RegisterCommands()
    {
        var lib = Assembly.GetExecutingAssembly();
        var types = lib.GetTypes();

        foreach (var type in types)
        {
            // Abstract types cannot be instantiated, so skip them.
            if (type.IsAbstract)
               continue;

            // Every concrete subclass of YCommand becomes a usable command.
            if (type.IsSubclassOf(typeof(YCommand)))
            {
               var cmd = type.GetConstructor(Type.EmptyTypes).Invoke(null) as YCommand;
               commands.Add(cmd.Name.ToLower(), cmd);
            }
        }
    }
    

    A small subset of the classes that make some functions available from the commands parser is displayed below.

    The command parser works with an arbitrary number of arguments. The basic code is similar to that described by Rappl in his article about YAMP on CodeProject.

    To write the specific sensor functions in the YAMP plug-in, Rappl first started with an abstract base class, which he extended to the various sensors. Here's how Rappl implemented the acc() function:

    using System;
    using Windows.Devices.Sensors;
    
    namespace YAMP.Sensors
    {
        [Description("Provides access to the acceleration sensor of an Intel UltraBook™.")]
        [Kind("Sensor")]
        public class AccFunction : SensorFunction
        {
            static Accelerometer sensor;
    
            static AccFunction()
            {
                try
                {
                    sensor = Accelerometer.GetDefault();
                }
                catch { }
            }
    
            protected override void InstallReadingChangedHandler()
            {
                if(sensor != null)
                    sensor.ReadingChanged += OnReadingChanged;
            }
    
            protected override void UninstallReadingChangedHandler()
            {
                if (sensor != null)
                    sensor.ReadingChanged -= OnReadingChanged;
            }
    
            void OnReadingChanged(Accelerometer sender, AccelerometerReadingChangedEventArgs args)
            {
                RaiseReadingChanged(args.Reading);
            }
    
            /// <summary>
            /// retrieves acceleration in (X,Y,Z)-direction in units of g
            /// </summary>
            /// <returns></returns>
            [Description("Retrieves acceleration in (X, Y, Z)-direction in units of g. Hence usually (no movement) the returned vector will be (0, 0, 1).")]
            [ExampleAttribute("acc()", "Returns a 3x1 matrix of accelerations in the x, y and z directions.")]
            public MatrixValue Function()
            {
                return new MatrixValue(Acceleration);
            }
    
            public static double[] Acceleration
            {
                get
                {
                    if (sensor == null)
                        return new double[3];
                    var acc = sensor.GetCurrentReading();
                    return new double[] { acc.AccelerationX, acc.AccelerationY, acc.AccelerationZ };
                }
            }
        }
    }
    

    Note how Rappl provided many static functions, which allow direct access from Sumerics (for live sensor data) without creating an explicit instance or having YAMP interpret a fixed expression. The sensor reading AccelerometerReading is included in a nested class. If no sensor is available, the functions still return data with the correct dimensions: scalars and matrices whose every value is 0.

    Because Sumerics focuses on data visualization, it was critical to choose the appropriate libraries for displaying 2D and 3D plots. In the end the decision was between two open-source libraries: IronPlot and OxyPlot. IronPlot, which has the ability to render 3D plots, provides a suitable plotting package for IronPython (without using Mathplotlib, which would have added a dependency on Python and a dependency on the communication between C# and Python). OxyPlot is a platform-independent plotting package that includes a WPF implementation. OxyPlot, according to Rappl, is well documented, extensible, and full of great features, including the ability to annotate and track plots.

    For 3D plotting, Rappl created his own library, albeit one built around OxyPlot. This library allowed for the rendering of not only rectangles, lines, and more from a data series, but also a complete image. This rendering led to a demonstrable increase in performance and made it possible to plot heat maps and other complex graphics that display well on the various Ultrabook device screens available today. (Several Ultrabook devices have screen resolutions of up to 1600x900.)

    Helpful Resources

    Rappl read the Intel articles that focus on using Ultrabook device sensors, including articles available at http://software.intel.com/en-us/articles/ultrabook-touch-and-sensor-resources. He also reviewed specifications for touch-aware apps (not just touch-enabled apps) and consulted external resources, including those on CodeProject. Additionally, Rappl relied on the Intel AppUp® Developer certification tool for creating simple MSI setup files. More detail about Rappl's work on Sumerics is described in an article he wrote for CodeProject: http://www.codeproject.com/Articles/472698/Sumerics. Rappl has also posted several short videos of himself using Sumerics, including one showing how to plot data captured from the Ultrabook device inclinometer, gyrometer, and compass, on his YouTube* channel: http://www.youtube.com/user/FlorianRappl.

    Things to Consider

    Rappl considers HTML5 to be the future for touch screen UIs and, with the experience he gained from creating Sumerics, stated that if he were starting over he would likely use HTML5 instead of WPF, particularly if he wanted to distribute his app on the Windows Store.

    Conclusion

    Sumerics takes data imported from static files or gathered from Ultrabook's many onboard sensors and produces compelling 2D and 3D graphs that can be easily manipulated via the touch screen. This sort of advanced data visualization might prove useful in various environments, from engineering test labs at established companies to high school physics classrooms. (In a basic course covering classical mechanics, students could set up an experiment to let an Ultrabook fall from a reasonably tall building and then use Sumerics to review data from the accelerometer to see if the device achieved weightlessness.) Despite the relative newness of the Ultrabook device hardware and Windows 8 operating system, Rappl built the award-winning app in a matter of months, working around other commitments, mostly using well-established languages and frameworks (C# and WPF) and freely available libraries.

    Rappl concludes that Ultrabook devices running Windows 8 represent a relatively new category of devices, one marked by the portability and ease-of-use of touch screen tablets along with the power of a full-featured laptop. But his experience building a compelling app that evokes comparisons to industry-leading MATLAB-based products demonstrates that developers well-versed in established Microsoft technologies, particularly C# and WPF, can build a compelling touch screen app that takes full advantage of the Ultrabook device's many features—including its various sensors—while staying in the familiar territory of the Microsoft desktop, albeit one that's now touch-enabled.

    About Author

    Ostensibly Florian Rappl's main focus is writing his dissertation—the weighty working title is "Feynman Diagram Sampling for Quantum Chromodynamics"—in the field of theoretical particle physics. However, he's also active in computing in a way that far exceeds the work of most physics grad students. Currently he teaches graduate level courses in C# programming and in designing Web applications with HTML5, CSS3 and JavaScript. He's involved in several high-performance computing projects at the University of Regensburg, including one attempting to build a supercomputer with Intel® Many Integrated Core Architecture chips. He's an MVP at CodeProject, one of the more thriving developer communities on the Web. It turns out that winning the Windows* 8 & Ultrabook™ App Innovation Contest is just the latest in a long string of accomplishments for Rappl, though in addition to the acclaim, this accomplishment also netted him more than USD 30,000 in prize money plus a new Ultrabook device.

    Portions of this document are used with permission and copyright 2012 by CodeProject. Intel does not make any representations or warranties whatsoever regarding quality, reliability, functionality, or compatibility of third-party vendors and their devices.

    All products, dates, and plans are based on current expectations and subject to change without notice. Intel, the Intel logo, Intel AppUp, Intel Atom, the Intel Inside logo, and Ultrabook, are trademarks of Intel Corporation in the U.S. and/or other countries. *Other names and brands may be claimed as the property of others.

    Copyright © 2013. Intel Corporation. All rights reserved.

  • ultrabook
  • Windows store
  • sensor
  • Windows Presentation Foundation
  • WPF
  • sumerics
  • UI
  • Windows desktop
  • touch
  • touch-enabled
  • Developers
  • Microsoft Windows* 8
  • Microsoft Windows* 8 Style UI
  • Sensors
  • URL
  • Everything for the Ultrabook™ developer


    One of the first questions application developers typically raise is: how can I get started with app development on the Ultrabook™? Well, here are some excellent pointers.

    Some months ago, CodeProject ran an App Innovation Contest (AIC) for Ultrabooks. The contest is now over, and several quality apps were submitted, with some serious awards on offer. As part of the judging, there was a clear emphasis both on showing off hardware sensor capabilities and on coming up with something innovative. Participants were asked to write an article that teaches others what they learned about writing applications for the Ultrabook™, i.e., how the app was developed with the tools and sensors available on the platform.

    For those not familiar with the CodeProject AIC and the quality of these articles, I would like to provide a short sampling, with links to some of the more interesting write-ups.

    How does one develop an Ultrabook™ app? Abhishek Nandy wrote an award-winning article, Ultrabook Development My Way, on how to get started: what an Ultrabook™ is, what sensors are available, what steps you need to follow to get your app posted on the Windows Store or the Intel AppUp site, and what tools you need to start developing Windows* apps on the Ultrabook™. All great information for the new Ultrabook™ developer.

    Now with that out of the way, here’s a quick look at two specific apps and their features:

    • Some game developers utilized the lesser-known multi-platform App Game Kit (AGK), which is available in C++ and a proprietary Tier 1 or Tier 2 language that caters to everyone from beginners to expert C++ developers. The AGK exports compiled code to a number of platforms such as Windows*, iOS*, and Android*. In addition to becoming familiar with the AGK, Ultrabook™ game developers must also understand how to best utilize the eight sensors currently available on the Ultrabook™ (Near Field Communication, Geolocation, Compass, Gyrometer, Inclinometer, Orientation, Light Sensor, and Multitouch), and possibly develop simulators for these sensors to validate their apps. The article written by Steve Vink, Lets just Play, is a great intro to developing gaming apps for the Ultrabook™ using AGK.

    • Mapping: how can an app take full advantage of the larger, faster storage (SSD) on the Ultrabook™ (vs. mobile handhelds), as well as the available sensors, in a way that truly puts these platforms to a test usually reserved for games? Thomas Willwacher’s article LocalStreetLMaps has a solution: using a subset of the 300-GB OpenStreetMap database, it is possible to develop a map viewer with features similar to Google*/Bing* maps while running on an Ubuntu* server in VMware*.

    I encourage readers to take a look at some of the apps and excellent write-ups on the CodeProject sites:

    http://www.codeproject.com/KB/ultrabooks/

    http://www.codeproject.com/KB/ultrabooks/#App+Innovation+Contest+Entries

    http://www.codeproject.com/competitions/611/Ultrabook-Article-competition

     



    Case Study: Finalhit Incorporates C++ and Cocos2d-x in Windows* Desktop Game Development


    By Karen Marcus

    Case Study: Finalhit Incorporates C++ and Cocos2d-x in Windows* Desktop Game Development [PDF 987.4KB]

    In 2012, 30 developers participated in the Europe, Middle East, and Africa–based Ultrabook™ Experience Software Challenge. Intel held the challenge to foster developer creativity in enhancing the user experience with Ultrabook devices. Participants had six weeks to develop original applications that integrated touch, gesture, and voice functionality. Finalhit Ltd. was a participant and won third place for its application, Live Ball, which the team developed specifically for the challenge. The game enables players to use the keyboard and mouse, touch, or device tilting to keep a beach ball from touching the ground (see Figure 1). Ivan Petrovic, managing director, characterizes the game as “easy but addictive.”

    Finalhit previously developed Ultra Screen Saver Maker, a leading screen saver creation tool. Live Ball is the first game the company has developed. The two applications have some common elements, including C++ as a programming language, use of the Windows* platform, entertainment value, and a keen awareness of customers’ use of mobile devices and the new breed of hybrid laptop–tablets.


    Figure 1. Live Ball main menu and play screen

    Initial Development Steps

    For the challenge, the first task for the team was to find a 2-D game engine platform. Their requirements were based on the short deadline, the challenge guidelines, and the opportunity to develop for iOS* and Android*. The 2-D game engine had to:

    • Be simple to use and easy to learn
    • Include a physics engine
    • Be extendable to add support for touch and sensors
    • Support cross-platform functionality, including mobile phones

    After extensive research, the team chose Cocos2d-x, which supports all desktop and mobile platforms, is an open source project, and is easily extendable. Petrovic notes, “Five hundred-plus million downloads of Cocos2d-x–based games worldwide sounded trustworthy, too.”

    The Cocos2d-x game engine’s features satisfied several important requirements:

    • It compiles to native code for all platforms.
    • It is based on C++.
    • Games can be developed and fully debugged in Microsoft* Visual Studio* and Windows.
    • 95% of the code is portable.
    • It is open source and therefore fully extensible.
    • It supports Windows 8 Desktop, with Windows RT support coming soon.

    Petrovic says, “We are going to release a Windows 8 Store app when Cocos2d-x fully supports it. All game platforms face this obstacle, because Microsoft doesn’t support OpenGL* on Windows RT, just Microsoft DirectX*. As of March 27, 2013, Microsoft China ported Cocos2d-x to Windows Phone 8; this is a major step toward implementing Windows RT support.” The team encourages other developers to adopt Cocos2d-x, because it supports platforms with a large market share.

    The team also needed to choose the right programming language. Petrovic explains, “HTML5 is known to have performance issues on the current generation of mobile phones, so we wanted to explore other options. Lua-based engines are popular, but to extend them, we needed to develop things in C++. So, we figured C++ was the natural choice for a task like this.”

    Development Process for Windows 8

    Live Ball was designed as a Windows Desktop application. The team selected this option based on their familiarity with programming Desktop applications as well as the limited timeframe, as more time would have been needed for Windows Store app development.

    Even with Desktop development taking less time, the biggest challenge for the team was the short deadline. Petrovic notes, “We had six weeks to develop a game from scratch, with touch and sensor support, with no previous experience programming games or touch and sensors.”

    Though the application was designed for Windows Desktop, the team wanted to make it work on both Windows-based Ultrabook devices and mobile devices. Petrovic comments, “Existing game engines have just begun a Windows Store implementation, and they are far from stable.”

    In the development process, the team did not use any Windows 8–specific features. Petrovic says, “We are happy to say that Live Ball works on Windows 8, Windows 7, Windows Vista*, and Windows XP; Apple iPhone*, iPad*, and iPod touch*; and Android.” He adds, “When designing games, you don’t have to follow any specific operating system user interface (UI) guidelines. Games usually run in full screen or in windowed mode, without much need for standard UI controls. In contrast, we realized that if we developed a Windows RT app only, we would need to sacrifice support for earlier Windows versions and mobile platforms. So we also developed the Windows Desktop app.”

    Development Process for Ultrabook Platforms

    With Live Ball, users can touch or tilt Ultrabook devices to run left or right to hit the ball. Petrovic observes, “Touch and sensors bring the mobile experience to Windows running on Ultrabook devices. We believe every game should offer as many ways to interact with it as possible. So, besides the mouse and keyboard, we’ve added touch and accelerometer support.”

    Touch

    Live Ball implements single tap to browse through menus and touch events as well as long touch and drag to move the character during the game. Petrovic explains how these touch actions were selected: “Single tap is a natural choice for browsing menus, because it is similar to mouse clicks (see Figure 2). Long touch is something most users are used to from playing games on mobile devices. Moving the character using long touch is more convenient than using the mouse in the game.”


    Figure 2. When users tap or click a menu item, it grows larger, as with Start Game and High scores above.

    Petrovic notes that it was important to properly identify the touch capabilities the operating system and device supported. He says, “We’ve extended the Cocos2d-x platform to support touch, so it properly registers and handles Windows WM_TOUCH messages. We’ve implemented full WM_TOUCH message logic and applied it to the Cocos2d-x concept. Cocos2d-x officially includes our code now.”

    In addition, says Petrovic, “We’ve carefully implemented the Windows Touch application programming interface (API), including a check if the computer supports touch at all, while maintaining backward compatibility with previous Windows versions. Maintaining backward compatibility with previous Windows versions consists of checking whether each Windows Touch API function exists in user32.dll.”
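
    A minimal sketch of that run-time check (an assumption on my part that only RegisterTouchWindow needs resolving; the actual Live Ball code likely checks every Touch API function it uses) might look like this:

    #include <windows.h>

    /* Resolve the Windows Touch API at run time so the binary still
       loads on Windows versions that predate user32's touch exports. */
    typedef BOOL (WINAPI *RegisterTouchWindowProc)(HWND, ULONG);

    BOOL EnableTouchIfAvailable(HWND hWnd)
    {
        /* SM_DIGITIZER reports the machine's digitizer capabilities. */
        if ((GetSystemMetrics(SM_DIGITIZER) & NID_READY) == 0)
            return FALSE;  /* no touch hardware ready */

        RegisterTouchWindowProc pRegister = (RegisterTouchWindowProc)
            GetProcAddress(GetModuleHandle(TEXT("user32.dll")), "RegisterTouchWindow");
        if (pRegister == NULL)
            return FALSE;  /* OS predates the Windows Touch API */

        return pRegister(hWnd, 0);  /* start receiving WM_TOUCH messages */
    }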

    “When touch was present,” adds Petrovic, “we stopped processing false mouse move/button messages and properly handled the WM_TOUCH message so Cocos2d-x would understand it. When a user taps the screen, Windows generates mouse messages for legacy app support. To avoid duplicate notifications, it was necessary to see whether it was the real mouse click or touch event by checking (GetMessageExtraInfo() & MOUSEEVENTF_FROMTOUCH) == MOUSEEVENTF_FROMTOUCH.”
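
    The check Petrovic quotes can be wrapped in a small helper. MOUSEEVENTF_FROMTOUCH (0xFF515700) is the signature value Microsoft documents for mouse messages that Windows synthesizes from touch input; it is not defined in the SDK headers, so apps define it themselves:

    #include <windows.h>

    #define MOUSEEVENTF_FROMTOUCH 0xFF515700

    /* TRUE when the mouse message currently being processed was generated
       by Windows from a touch event rather than a physical mouse. */
    BOOL IsMouseMessageFromTouch(void)
    {
        return (GetMessageExtraInfo() & MOUSEEVENTF_FROMTOUCH) == MOUSEEVENTF_FROMTOUCH;
    }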

    Sensors

    At the start of the challenge, the team considered the available Ultrabook sensors to determine which were the most suitable for the game; they found that the accelerometer was a logical choice. Live Ball uses the accelerometer to move the character during the game.

    Sensor development was challenging, because, says Petrovic, “There is a lack of real-world examples for sensor implementation in C++.” He continues, “Although native support for sensors exists in Windows 7 and later, it seems that prior to Windows 8, sensors were mostly neglected in Windows. This was a time-consuming task, so we decided not to submit such code for inclusion in Cocos2d-x but to stay close to source. We’ve implemented the SENSOR_TYPE_ACCELEROMETER_3D sensor, so users can tilt the device to move the character left or right to hit the balls.” He adds, “It is interesting to see users perform this on a laptop, as the action was exclusive to mobile devices before Ultrabook devices came along.”

    The Ultrabook Experience Software Challenge

    Finalhit had several key opportunities in this challenge:

    • Creating a game for the first time
    • Finding the right 2-D game engine platform
    • Finding the right programming language
    • Completing the application within the allotted time
    • Incorporating sensor implementation using C++ with few real-world examples

    In developing for Ultrabook devices, the team was impressed with their fast boot time and their ability to resume operation from hibernation mode in just seconds. Petrovic says, “Ultrabook works like a charm; no additional performance optimization was necessary. It has great multimedia platform capabilities, and everything is built in for both fun and business. The integrated GPU is fast enough to process all our game needs without removing any sprite or background details. It is definitely the best laptop we’ve ever used.” He adds, “It still looks pretty unreal on a Windows platform. The great features that previously existed only in the mobile world—touch and accelerometer—provide a completely different experience for users playing our game.”

    Finalhit views Live Ball as a proof of concept from which they can dive further into game development. Petrovic notes that the company launched Live Ball based on its experience with the challenge, using all the features the team had already coded. Petrovic states, “We are in the process of finalizing an Intel® Perceptual Computing software development kit implementation, so in addition to keyboard, mouse, touch, and accelerometer, players will be able to play the game by using gestures and voice commands with Creative Interactive Gesture Camera. With this version, we will participate in the Intel® Perceptual Computing Challenge Phase 2.”

    Summary

    Finalhit Ltd. participated and won third place in the Intel® 2012 Ultrabook Experience Software Challenge. The company created the Live Ball game specifically for this challenge. Initial development steps included finding the right 2-D game engine platform and programming language. For the 2-D game engine platform, the team settled on Cocos2d-x, because it supports all desktop and mobile platforms, is an open source project, and is easily extendable. The team selected C++ as the programming language. The game was developed as a Windows Desktop application based on the team’s familiarity with programming Desktop applications and the limited timeframe. The team incorporated touch, accelerometer, and speaker use into the game’s functionality. Users can tilt the Ultrabook device to move the game character and touch the screen to perform certain actions. The team had to extend the Cocos2d-x platform to support touch, properly register Windows WM_TOUCH messages, and maintain backward compatibility with previous Windows versions. Sensor development proved challenging because of a lack of real-world examples for sensor implementation using C++. Through the development process, the team was impressed with several aspects of Ultrabook devices, including the fast boot time; recovery from hibernation mode; great multimedia platform capabilities; and fast, integrated GPU.

    Company

    Finalhit is an independent software development company founded in 2001 in London, United Kingdom. It strives to create extremely easy-to-use software without compromising innovation and cutting-edge technology.

    The company has been programming in C++ for Windows for more than 10 years. Its flagship product, Ultra Screen Saver Maker, which creates screen savers, has had more than 500,000 downloads worldwide and has been used by thousands of companies, including Accenture, Adidas, Alcatel, BMW, DHL, IBM, Ingenico, Mercedes-Benz, Motorola, Nestlé, NextGen, Novartis, and Oracle.

    About the Author

    Karen Marcus, M.A., is an award-winning technology marketing writer with 16 years of experience. She has developed case studies, brochures, white papers, data sheets, solution briefs, articles, website copy, video scripts, and other documents for such companies as Intel, IBM, Samsung, HP, Amazon Web Services, Microsoft, and EMC. Karen is familiar with a variety of current technologies, including cloud computing, IT outsourcing, enterprise computing, operating systems, application development, digital signage, and personal computing.

    Intel, the Intel logo, and Ultrabook are trademarks of Intel Corporation in the US and/or other countries.

    Copyright © 2013 Intel Corporation. All rights reserved.

    *Other names and brands may be claimed as the property of others.

  • ultrabook
  • Windows* 8
  • Windows store
  • Apps
  • Windows desktop
  • APIs
  • Ultrabook™ devices
  • touch
  • touch-enable
  • game development
  • sensor
  • Microsoft Windows* 8
  • Game Development
  • Microsoft Windows* 8 Desktop
  • Microsoft Windows* 8 Style UI
  • Sensors
  • Touch Interfaces
  • User Experience and Design
  • URL
  • Re-imagining Apps for Ultrabook™ (Part 5): Device Motion



    The fifth part of our Re-imagining Apps for Ultrabook™ video series is now available. In it, I’ll provide an overview of device motion and walk through a few ways we can take advantage of this set of capabilities in the desktop apps we create.

    Device motion is made possible by a combination of always-on sensors (typically an accelerometer, a magnetometer, and a gyroscope) that tell us how a computer is moving through the space around it. The ability of these sensors to provide precise information about the movement of a device opens up new design possibilities for applications. From adjusting the user interface based on orientation changes, to using three-dimensional motion as input, to combining device motion with location detection, video cameras, and light sensor capabilities, there's no shortage of interesting interface designs made possible by device motion.

    Device Motion Resources

    In the video I mention a number of resources that are listed below for quick access.

    About the Series

    The Re-imagining Apps for Ultrabook™ video series introduces new ways of thinking about the design and development of desktop applications and offers practical design advice to help developers take advantage of new opportunities in Intel's Ultrabook devices.

    About Your Host

    Luke Wroblewski is an internationally recognized digital product leader who has designed or contributed to software used by more than 700 million people worldwide. He was co-founder and CPO of Bagcheck (acquired by Twitter in 2011), chief design architect at Yahoo! Inc., and is the author of three popular Web design books including his most recent: Mobile First. Luke is a contracted vendor with Intel; opinions expressed are his own and do not necessarily represent Intel's position on any issue.

  • location
  • user experience
  • software marketing
  • Design
  • interface design
  • interaction
  • UI
  • geolocation
  • business and marketing
  • reimagining

  • Tutorial

    May 7th, 2013: AppLab for Windows* 8 on mobile platforms


    Well, just last week I returned from one of our first Application Development Labs (AppLabs), devoted to development on the Intel platform on Windows* 8 for touch, sensors, and HTML5. The event took place in Los Angeles (LA), California, in the United States. The presenters were drawn from Intel, including two of our brightest Technical Marketing Engineers, Meghana Rao and Gayathri Murali, and yours truly, Paul Steinberg, as host and emcee.

    Meghana Rao presenting at the Intel AppLab

    Also in attendance was UX/UI specialist Dr. Jorge Toro from Integrated Computer Systems (ICS), who brought information on the new Windows* 8 Store design model as well as practical guidance on how best to implement it in modern applications. Finally, we were joined by Garth Wolfendale, Microsoft Premier Senior Consultant, who brought in-depth guidance on Windows* RT APIs.

    The AppLab day was a long one; we spent more than six hours going over the content. These are developer events, and we try to include as much sample code and as many application examples as possible. We also spent quite a bit of time on UX/UI best practices, led by the experts from ICS and Microsoft. We covered new models for distribution, including certification and Windows Store side-loading. Finally, we wrapped up with a quick overview of our Intel Developer Zone program resources for designing, building, testing, and marketing applications in target segments. See the agenda below for full details.

    Intel AppLabs usually have an industry vertical focus: Education, Health Care, Financial Services, Retail, and the like. The LA AppLab had an audience drawn from a number of industry sectors, but we used examples drawn mostly from education. We recently held an AppLab in New York that had a strong Financial Services orientation.

    Just in case you are wondering, we have two flavors of AppLabs. The one I am describing here is Windows focused, but we have a similar offering for Android* developers as well.

    So how do you access an AppLab? Typically, AppLabs are offered to developers at companies with dedicated Intel account coverage. AppLabs are held worldwide, so if this matches your company profile, talk to your Intel account representative. We also offer regional AppLabs from time to time; the LA one was a good example. Check out our AppLab page to see if something is scheduled near you.

    If neither of these works, never fear. I have attached a subset of the AppLab content to the bottom of this blog. Feel free to download it and use it to your advantage. Also, we will be taking much of the AppLab content and broadcasting it via live interactive webinars, which will be available worldwide. We are working on the content and schedule now, so stay tuned for that this summer.

    In the meantime, leaving a comment on this blog is a great way to get in touch with me if you have feedback on AppLabs or any other issue. I'd love to hear from you.

    Typical Windows AppLab Schedule

    Intel Platform Overview / Intro to Windows* 8
    • Platform capabilities of 3rd Generation Intel® Core™ processor-based Ultrabook™ convertibles and Clover Trail-based tablet platforms
    • Introduction to the new Windows* 8 capabilities

    Designing applications for Windows* 8 Desktop & Windows 8 Store
    • Case studies highlighting design considerations and recommendations for Windows 8 Desktop and Windows 8 Store apps

    Optimizing Windows 8 Desktop apps with touch
    • Enabling desktop apps for touch using Windows* 8
    • Code walkthrough and comparison metrics for enabling touch on Windows* 7 and Windows* 8

    Development Techniques for Windows* 8 Store
    • Coding guidance on WinRT APIs

    Windows 8, HTML5* & Flash*
    • Developing HTML5 apps for the Windows Store
    • HTML5 tools

    Development models to enhance productivity in enterprise apps
    • Code walkthrough: platform-portable class libraries that can be leveraged across Windows 8 Desktop and Store apps

    Enabling Windows 8 apps with sensors
    • Windows 8 recommended sensors and the APIs used to access them
    • Code walkthrough for accelerometer, NFC, and GPS

    Distribution, Windows App Certification & Deployment techniques
    • Distributing Windows 8 Desktop and Store apps
    • Windows 8 certification requirements and deployment options
    • Side-loading for enterprise applications

    Intel® Developer Zone (IDZ) Resources
    • Development Resources
    • Business Resources
    • Community Resources

    Other Resources

  • AppLab
  • windows 8 ui
  • windows 8 Store
  • sensors
  • intel ultrabook
  • Touch and Sensors
  • Touch API

    Attachments: 

    http://software.intel.com/sites/default/files/blog/392565/applab-2013.pdf
    http://software.intel.com/sites/default/files/blog/392565/designing-applications-for-the-windows-8-desktop.pdf
  • Event

  • Windows* 8 Sensors Sample Application – A Ball in Hole Game


    Abstract


    The purpose of this sample application is to provide a step-by-step guide to quickly set up and start using sensor data via Windows* 8 Sensors application programming interfaces (APIs) on both Windows Desktop and Store application platforms. This guide has two sections. The first, SensorDesktopSample, is a Windows 8 Desktop application, and the second, SensorSampleApp, is a Windows Store app. Starting with Visual Studio* 2012 project templates, both applications implement a simple Ball in Hole game with the same logic for easy comparison. Refer to this link for more information on the differences between Windows Desktop and Windows Store Application Sensor APIs. The ball moves around the screen based on accelerometer readings, and the game ends once the ball is in the hole.
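
    As a rough illustration of the game logic the abstract describes (a hypothetical sketch, not code from the sample's source), accelerometer readings can be integrated into ball velocity and position each frame:

    #include <math.h>

    /* Hypothetical ball state; units are pixels and pixels/second. */
    typedef struct { double x, y, vx, vy; } Ball;

    /* Map accelerometer readings (in g) to ball motion over one frame
       of dt seconds. PIXELS_PER_G is an assumed tuning constant. */
    void UpdateBall(Ball* b, double accelX, double accelY, double dt)
    {
        const double PIXELS_PER_G = 500.0;
        b->vx += accelX * PIXELS_PER_G * dt;
        b->vy += -accelY * PIXELS_PER_G * dt;  /* screen Y grows downward */
        b->x  += b->vx * dt;
        b->y  += b->vy * dt;
    }

    /* The game ends when the ball's center falls within the hole's radius. */
    int IsBallInHole(const Ball* b, double holeX, double holeY, double radius)
    {
        double dx = b->x - holeX, dy = b->y - holeY;
        return (dx * dx + dy * dy) <= radius * radius;
    }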

    Download article


    Windows* 8 Sensors Sample Application – A Ball in Hole Game Whitepaper

    Download Source Code


    ball-in-hole-sensor-desktop-sample.zip (25.02 MB)
    ball-in-hole-sensor-store-sample.zip (56.41 MB)

    License


    Intel sample sources are provided to users under the Intel Sample Source Code License Agreement

  • WindowsCodeSample
  • Microsoft Windows* 8
  • Microsoft Windows* 8 Desktop
  • Microsoft Windows* 8 Style UI
  • Sensors
  • URL
  • Perceptual Challenge Brazil - Frequently Asked Questions


    1. Can I submit an app in Phase 2 if my idea is not a Phase 1 finalist?

    A: You can only participate in Phase 2 if you are a Phase 1 finalist or if you already own the camera.

    2. What is an application demonstration?

    A: In Phase 2, a prototype or demo version of the developed application will be requested. This demonstration should represent the application in all its functionality, as close as possible to the finished product. However, the demonstration need not have all the features required for a fully functional product.

    3. How many ideas (Phase 1) or applications (Phase 2) can I submit?

    A: You may submit as many ideas as you wish during the first phase. During the second phase, only one application per participant is allowed.

    4. How many ideas can be selected as finalists of the first phase?

    A: Only one idea may be selected.

    5. Who can participate in this competition?

    A: The contest is open to any developer over 18 years of age who lives in Brazil. Intel employees are not eligible.

    6. Can I work with a team?

    A: Yes, absolutely. We encourage both individuals and teams to participate in the Intel Perceptual Computing Challenge. For team entries, you must register the company or choose one person as a representative to be registered; if the team wins, that individual is responsible for distributing any prizes won to the team members.

    7. Can I submit an existing application?

    A: Yes, you can submit an existing application with the Intel Perceptual Computing SDK features added, as long as you own, or hold a license to use, the intellectual property.

    8. Can I submit an application previously submitted to an Intel contest?

    A: Yes, you can submit applications previously entered in other Intel contests; however, your submission must explain how you will complete or change the application to add Perceptual Computing features.

    9. Can I enter on behalf of my company?

    A: Yes. Please indicate on the entry form that the entry is being made on behalf of the company. Prizes will be awarded to the company when this option is indicated.

    10. How are entries judged?

    A: All submissions will be judged after the submission deadline has passed. Entries will be judged according to the criteria defined in the rules.

    11. Will Intel take ownership of my idea or application?

    A: No. Ownership remains with the participant and/or company; however, Intel reserves the right to demonstrate and promote the idea and/or application demo.

    12. Will my personal information be kept private?

    A: Yes. Intel takes privacy seriously, and all personal information will be kept confidential.

    13. Where can I find the SDK?

    A: The SDK is available for download at: www.intel.com/software/perceptual.

    14. Do I need to use the SDK in my submission?

    A: Yes, all applications must make use of the SDK.

    15. Where do I go if I have other questions?

    A: Please visit our forum.


  • Contest
  • Event
  • Detecting Slate/Clamshell Mode & Screen Orientation in Convertible PC


    Downloads


    Download Detecting Slate/Clamshell Mode & Screen Orientation in Convertible PC [PDF 574KB]
    Download DockingDemo3.zip [37 KB]

    Executive Summary


    This project demonstrates how to detect slate vs. clamshell mode, as well as simple orientation detection, on the Windows* 8 desktop. The application is a tray application in the notification area and is based on Win32 and ATL. The tray application also works when the machine is running in New Windows 8 UI mode. It uses Windows messages and the Sensor API notification mechanism and doesn’t need polling. However, the app requires appropriate device drivers, and it was found that many current OEM platforms don’t have the necessary drivers for slate / clamshell mode detection. The simple orientation sensor works on all the tested platforms.

    System Requirements


    System requirements for slate / clamshell mode detection are as follows:

    1. Slate / clamshell mode indicator device driver (Compatible ID PNP0C60).
    2. Docking mode indicator device driver (Compatible ID PNP0C70).
    3. Go to Device Manager -> Human Interface Devices -> GPIO Buttons Driver -> Details -> Compatible IDs. If you find PNP0C60, that’s the slate mode driver. Without this driver, slate mode detection doesn’t work.
    4. For classic docking mode detection, you need the docking mode indicator driver (PNP0C70).

    System requirements for orientation detection are as follows:

    1. Simple Device Orientation Sensor.
    2. This sensor was present in all tested convertible PCs.

    Application Overview


    • Compile and run the application, and it will create a tray icon. For testing purposes, customize “Notification Area Icons” so that DockingDemo.exe’s behavior is to “Show icon and notifications” in the lower right corner of the screen.
    • Move the mouse over the icon and it shows the current status.

    • Right click on the icon for further menus – About, Save Log…, and Exit. Save Log will let you save all the events to a specified file. When you save the events to the log, it clears the events in the memory.
    • Rotate and switch back and forth between the slate / clamshell mode or rotate the platform. The tray icon will pop up a balloon to notify the change.

    Slate / Clamshell Mode Detection


    The OS broadcasts the WM_SETTINGCHANGE message to the windows when it detects a slate mode change, with the string “ConvertibleSlateMode” in lParam. In the case of a docking mode change, it broadcasts the same message with the string “SystemDockMode.” WinProc in DockingDemo.cpp handles this message. The API to query the actual status is GetSystemMetrics. This method also works when the system is running in New Windows 8 UI mode.

     
    BOOL bSlateMode = (GetSystemMetrics(SM_CONVERTIBLESLATEMODE) == 0); 
    BOOL bDocked = (GetSystemMetrics(SM_SYSTEMDOCKED) != 0); 
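
    A minimal WndProc sketch of this message handling (simplified; the sample's actual handler also logs events and updates the tray balloon) might look like this:

    #include <windows.h>

    LRESULT CALLBACK WndProc(HWND hWnd, UINT msg, WPARAM wParam, LPARAM lParam)
    {
        if (msg == WM_SETTINGCHANGE && lParam != 0)
        {
            LPCWSTR area = (LPCWSTR)lParam;  /* names the setting that changed */
            if (lstrcmpW(area, L"ConvertibleSlateMode") == 0)
            {
                BOOL bSlateMode = (GetSystemMetrics(SM_CONVERTIBLESLATEMODE) == 0);
                /* ... notify the user of the slate/clamshell change ... */
            }
            else if (lstrcmpW(area, L"SystemDockMode") == 0)
            {
                BOOL bDocked = (GetSystemMetrics(SM_SYSTEMDOCKED) != 0);
                /* ... notify the user of the docking change ... */
            }
        }
        return DefWindowProc(hWnd, msg, wParam, lParam);
    }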
    

    Screen Orientation Detection


    In the desktop environment, the OS broadcasts the WM_DISPLAYCHANGE message to the windows when it detects orientation changes. lParam’s low word is the width, and its high word is the height, of the new orientation.
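
    For instance, a handler can decode the new resolution like this (sketch):

    #include <windows.h>

    /* Decode the resolution delivered with WM_DISPLAYCHANGE. */
    void OnDisplayChange(LPARAM lParam)
    {
        int width  = LOWORD(lParam);   /* low word: new width in pixels   */
        int height = HIWORD(lParam);   /* high word: new height in pixels */
        BOOL isPortrait = (height > width);
        /* Landscape vs. portrait only; flipped variants look identical here. */
    }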

    There are two problems with this approach:

    • This approach only detects landscape and portrait mode. There is no distinction between landscape vs. landscape flipped and portrait vs. portrait flipped.
    • WM_DISPLAYCHANGE simply doesn’t work when it is running in New Windows 8 UI mode.

    Fortunately, Microsoft* provides COM interfaces to directly access the various sensors and there are various white papers about how to use it. Some of the references are listed here.

    In this project, SimpleOrientationSensor class implements the infrastructure to access the orientation sensor, and OrientationEvents class is sub-classed from ISensorEvents to register the callbacks for the orientation change events. Since the Sensor APIs use callback mechanism, the user application doesn’t have to poll the events. This approach works when the system is running in New Windows 8 UI mode.
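
    The skeleton below shows the shape of such an event sink (a sketch; the class in the actual sample may differ in detail). The platform calls the ISensorEvents methods, so no polling is needed; the sink is attached with ISensor::SetEventSink:

    #include <windows.h>
    #include <sensorsapi.h>
    #include <sensors.h>

    class OrientationEvents : public ISensorEvents
    {
        LONG m_ref;
    public:
        OrientationEvents() : m_ref(1) {}

        // IUnknown
        STDMETHODIMP QueryInterface(REFIID riid, void** ppv)
        {
            if (riid == __uuidof(IUnknown) || riid == __uuidof(ISensorEvents))
            {
                *ppv = static_cast<ISensorEvents*>(this);
                AddRef();
                return S_OK;
            }
            *ppv = NULL;
            return E_NOINTERFACE;
        }
        STDMETHODIMP_(ULONG) AddRef() { return InterlockedIncrement(&m_ref); }
        STDMETHODIMP_(ULONG) Release()
        {
            ULONG count = InterlockedDecrement(&m_ref);
            if (count == 0) delete this;
            return count;
        }

        // ISensorEvents: invoked by the platform when something changes.
        STDMETHODIMP OnEvent(ISensor*, REFGUID, IPortableDeviceValues*) { return S_OK; }
        STDMETHODIMP OnLeave(REFSENSOR_ID) { return S_OK; }
        STDMETHODIMP OnStateChanged(ISensor*, SensorState) { return S_OK; }
        STDMETHODIMP OnDataUpdated(ISensor* pSensor, ISensorDataReport* pReport)
        {
            // New orientation data arrives here; forward it to the app.
            return S_OK;
        }
    };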

    The relationship between slate mode and rotation needs to be carefully thought out. Rotation may be enabled / disabled automatically depending on the slate / clamshell mode. To ensure the proper behavior, this sample uses a combination of the GetAutoRotationState API and the rotation sensor, i.e., it discards rotation event notifications when autorotation is NOT enabled. In that case, it uses EnumDisplaySettings to get the current orientation in the NotifyOrientationChange function.
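
    The original snippet is not reproduced here; the fragment below sketches the described fallback (function names are illustrative):

    #include <windows.h>

    /* Rotation sensor events should be trusted only while Windows
       auto-rotation is enabled (requires the Windows 8 SDK headers). */
    BOOL ShouldUseSensorRotation(void)
    {
        AR_STATE state = AR_ENABLED;
        GetAutoRotationState(&state);
        return state == AR_ENABLED;
    }

    /* Otherwise, read the current orientation from the display settings:
       returns DMDO_DEFAULT / DMDO_90 / DMDO_180 / DMDO_270. */
    DWORD GetCurrentDisplayOrientation(void)
    {
        DEVMODE dm;
        ZeroMemory(&dm, sizeof(dm));
        dm.dmSize = sizeof(dm);
        EnumDisplaySettings(NULL, ENUM_CURRENT_SETTINGS, &dm);
        return dm.dmDisplayOrientation;
    }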

    Intel, the Intel logo and Xeon are trademarks of Intel Corporation in the U.S. and other countries. *Other names and brands may be claimed as the property of others.
    Copyright© 2013 Intel Corporation. All rights reserved.

    License
    Intel sample sources are provided to users under the Intel Sample Source Code License Agreement.

  • ultrabook
  • Windows* 8
  • desktop
  • Tablet
  • applications
  • slate mode
  • clamshell mode
  • orientation detection
  • Developers
  • Microsoft Windows* 8
  • Windows*
  • Intermediate
  • Microsoft Windows* 8 Desktop
  • Microsoft Windows* 8 Style UI
  • Sensors
  • Touch Interfaces
  • Laptop
  • Tablet
  • Desktop
  • Attachment: dockingdemo3.zip (36.96 KB)
  • URL
  • Intel Collaborates with Universities - Build a 3D Model of Your Living Room

    $
    0
    0

    Hi! I am the director of the University Collaborative Research in Intel Labs. I am starting up a series of blogs featuring the latest highlights of our collaborative research with universities around the world. I am not going to be doing “computer research for dummies,” but I will be writing about the latest exciting research topics in an easy-to-understand form. No jargon? Ok, maybe just a little. But, no matter what your level of tolerance for engineer-speak, I think you will find these topics interesting, informative, and thought provoking. And after reading about them you will also understand why the topics are important and how they may affect hardware, software, and computing products in the near future.

    In this blog, I will tell you how Intel Labs is conducting collaborative research with several leading universities focusing on novel depth cameras, called RGB-D cameras, and what their future development possibilities might be. From 3D mapping and modeling to object and activity recognition, RGB-D cameras show promise for a wide variety of applications that researchers are just now scratching the surface of. Several RGB-D cameras have been introduced to the market since late 2010, including Kinect, PrimeSense, Creative* Interactive Gesture Camera and others. These have generated a huge amount of interest in software developer communities. This groundswell of interest has spawned a whole range of hacks and demos, especially in the gaming industry.

    What is less known are the fundamental changes that RGB-D cameras are bringing to the broad research field of visual perception—using cameras as a generic sensing device to perceive the world in many more of its facets than just body gestures. Liefeng Bo and Anthony LaMarca from Intel Labs provided me with the background information on their research for this blog post. Liefeng and Anthony, in collaboration with University of Washington, Carnegie Mellon University, Stanford University, Cornell, UC Irvine, UC Berkeley, Saarland University and others have carried out a series of research projects to demonstrate that, by providing synchronized color and depth data at high frame rates, RGB-D cameras lead to breakthroughs and vast advances in visual perception, such as in 3D modeling, Object Recognition, and Activity Tracking. These advances are enabling novel perception-based applications that could change how we live our daily lives in the near future.

    Research in Large-scale Three-Dimensional Mapping and Modeling with RGB-D:

    The world is 3D and building 3D digital models is the dream of researchers and developers from many fields such as medical, computer-aided design and computer graphics. Three-dimensional modeling is a challenging problem. While 3D scanners exist for tabletop objects, modeling a large environment at the scale of rooms and buildings is a much harder problem and has been actively researched using either expensive laser rangefinders, like the Velodyne HDL-64E, or elaborate vision techniques, e.g., in Photo Tourism.

    With an RGB-D camera, the 3D modeling problem becomes much easier and much more accessible to developers and consumers. At the Intel Science and Technology Center on Pervasive Computing hosted at University of Washington we have built prototype systems that allow a user to freely hold and move a RGB-D camera through a large indoor environment, such as a multi-room floor measuring 40 meters in length, and build a 3D model that is accurate in both geometry and color. The system runs in near real-time, merges a continuous stream of RGB-D data into a consistent 3D model, and allows user interactions on-the-spot such as checking partial results and rewinding to recover from mapping errors. Our RGB-D mapping work demonstrates that it is feasible to build a portable scanning device for large-scale 3D modeling and mapping in the near future.

    What use could such a system offer? There is a long list of potential applications once we have easy access to 3D modeling capabilities. One example is home remodeling. For quite a while, people have wanted a visualization tool to show the effects of remodeling—moving walls, changing paint color and lighting, and arranging furniture before making costly mistakes. A related example is virtual furniture shopping, where instead of going to a furniture store, people download 3D models of furniture and “try it out” in their actual home setting. There are also plenty of opportunities for virtual reality, where accurate 3D modeling and 3D localization can deliver convincing experiences. Just as today we take for granted the availability of GPS coordinates and 2D maps outdoors, in the foreseeable future we could have applications that make indoor 3D maps and locations a reality.

    Interactive 3D modeling of indoor environments, video, published at Ubicomp 2011:

    Research papers on RGB-D mapping:

    http://istc-pc-test-media.cs.washington.edu/papers/ubicomp-2011-interactive-mapping.pdf

    http://homes.cs.washington.edu/~peter/papers/3d-mapping-ijrr-12-preprint.pdf

    Robust Recognition of Everyday Objects with RGB-D:

    For any system to intelligently operate in the world, it needs to understand the semantics of the world, such as objects, people, and activities. Object recognition is a fundamental problem that has been at the center stage of computer vision research. While (frontal) face detection and recognition are quickly becoming practical and being deployed in cameras and laptops, generic object recognition remains challenging. To recognize a coffee mug may seem easy, but it is difficult to build a robust application to handle all possible mugs, with viewpoint and light changes, especially when a mug, unlike a face, does not have a distinctive appearance pattern.

    RGB-D cameras again make a fundamental difference in making object recognition robust as well as efficient. In our research, we have developed discriminative features for both color and depth data from an RGB-D camera, and used them as the basis to go way beyond the previous state of the art of recognition. We have evaluated our algorithms on a large-scale RGB-D dataset that covers 300 household objects viewed from different angles, and shown that we do much better than previous approaches, achieving ~90% accuracy for both object category recognition (i.e., is this a mug?) and instance recognition (i.e., is this Kevin’s mug?). In addition to classifying objects, RGB-D data also makes it much easier to extract multiple objects from complex scenes.

    What are the uses of object recognition? In collaboration with Human-Computer Interaction (HCI) researchers, we have demonstrated an interesting scenario of object recognition in the case of OASIS (object-aware situated interactive systems). We have developed a system that “brings to life Lego toys” by identifying objects, e.g., a dragon and house, and their orientations, and using a projector to overlay interesting animations associated with the objects, e.g., a dragon breathing fire. Using our robust RGB-D algorithm as the underlying recognition engine, the Lego OASIS system has been successfully demoed on many occasions such as CES (Consumer Electronics Show) 2011. At the Intel Science and Technology Center on Embedded Computing hosted at Carnegie Mellon University, we have developed a robot that scans the shelves of a retail facility and can identify misplaced merchandise and build a planogram (products mapping) of the shop. We believe this is only the tip of the iceberg. Once we can reliably recognize generic objects, developers can create many applications such as monitoring elder care activities and assisting cooking in smart kitchens.

    Lego OASIS Video:

    Research papers on RGB-D object recognition:

    http://joydeepb.com/Publications/icra2012_kinectLocalization.pdf

    http://www.ri.cmu.edu/pub_files/2011/6/2011%20-%20Micelli,%20Strabala,%20Srinivasa%20-%20Perception%20and%20Control%20Challenges%20for%20Effective%20Human-Robot%20Handoffs.pdf

    http://homes.cs.washington.edu/~lfb/paper/icra11a.pdf

    http://istc-pc-test-media.cs.washington.edu/papers/feature-learning-iser-12.pdf

    Fine-Grained Activity Recognition with RGB-D:

    Most recently, at the Intel Science and Technology Center on Pervasive Computing, we have started studying the problem of fine-grained activity recognition, such as trying to use an RGB-D camera to understand every step in a human activity. To use cooking as an example, we want to track the hand locations and actions, the use of utensils, and the transfer of ingredients throughout a recipe. While previous approaches have used instrumentation, such as RFIDs on objects and accelerometers on hands, we show that it is feasible to do fine-grained activity recognition using only an overhead RGB-D camera, as shown in the following video:

    http://istc-pc-test-media.cs.washington.edu/images/ubicomp2012_supp.wmv

    By using mainly the depth data, our system reliably tracks the moving hands, with the active objects in them, as well as the inactive objects on the table. Objects on the table are identified by their appearance using both color and depth. Actions, such as scooping and mixing, are identified mainly using the hand trajectories. A recipe puts a high-level constraint on the set of plausible actions and their sequences. Altogether, with enough training data, everything that occurs during cooking may be recognized in real-time, including all the objects used, all the actions done to them, and the resulting state changes, e.g., as things are mixed or chopped.

    Fine-grained activity recognition has great potential for many applications. Smart kitchens are an example, where we envision that a system could keep track of the cooking process, count the number of spoons of sugar that are added, issue warnings if one overcooks things, and provide suggestions if needed when working with a new recipe. Assembling furniture from IKEA is a related example where a smart system can “read” the instructions and offer assistance. Assembling Lego models is another such scenario. In general, being able to understand human actions and the objects involved is key to enabling seamless interactions between humans and automated systems.

    If you want to know more, here is a good research paper on fine-grained activity recognition using RGB-D cameras: http://istc-pc-test-media.cs.washington.edu/papers/ubicomp2012.pdf.

    That’s it. I hope you found this as fascinating as I did. Join me here each month. Let me know how I am doing and I will try to keep it interesting.

  • 3D Camera
  • research
  • university
  • Intel Labs
  • object recognition

  • Technical Article
  • Zombie Studios Talks about the 4th Generation Intel® Core™ Processor (Haswell)


    Zombie Studios talks about the benefits of the 4th Generation Intel® Core™ Processor (Haswell), including faster memory, longer battery life, superior graphical fidelity, the ability to reach a wider audience through the use of tablets, touch, and Ultrabook™ devices, and the way that touch is changing the entire gaming experience.

  • News
  • Developers
  • Microsoft Windows* 8
  • HTML5
  • Windows*
  • Sensors
  • Touch Interfaces
  • Laptop
  • Desktop
  • Haswell
  • Intel 4th Generation Core Processor
  • Intel Core
  • Zombie Studios
  • zombie
  • gaming
  • sensors
  • touch
  • power
  • ultrabook
  • user experience
  • intel software tools
  • Case Study: Sesame Factory Engages Sensor Functionality on Ultrabook™ Systems for an Enhanced Diary Application


    By Karen Marcus

    Downloads


    Case Study: Sesame Factory Engages Sensor Functionality on Ultrabook™ Systems for an Enhanced Diary Application [PDF 968.93KB]

    To encourage developer innovation in enabling a more immersive user experience with Ultrabook™ devices, Intel held the EMEA-based Ultrabook Experience Software Challenge in 2012. The challenge was held over 6 weeks and hosted 30 participants from 11 countries. Participants developed original applications that integrated touch, gesture, and voice functionality. Judging criteria were as follows:

    • Functionality. Does the application work quickly and effectively, without any issues?
    • Creativity. Does the application represent an innovative usage model?
    • Commercial potential. How useful is the application for the mass market?
    • Design. Is the application simple to understand and easy to use?
    • Fun factor. How positive is the emotional response to the application?
    • Stability. Is the application fast and simple, without glitches?

    Sesame Factory won second place with Day to Day, a diary application in which users can document their day-to-day experiences. In addition to text, users can include photos as well as their current mood, the weather, and location information for each entry (see Figure 1). The application, including the graphics, was developed specifically for this challenge.


    Figure 1. Day to Day diary entry

    Product


    The idea for Day to Day came from the Sesame Factory development team. Ercan Erciyes, co-founder, explains, “We like recording our days, taking notes, and reading them years later. It’s like looking at pictures and noticing how everything was different. This process offers the ability to reflect on what our priorities were and what really mattered back then.”

    Day to Day is the team’s first Microsoft Windows*-based application. Previously, they focused on embedded software development in C and web-based applications. The team most recently created a web-based platform that presents originally crafted how-to videos. This application enables web and mobile users to view step-by-step, instructional, how-to videos for various topics.

    Throughout the design process, the team wanted to include a lot of features in the application, but they also wanted to keep the interface as simple as possible. Erciyes says, “Initially, we wanted to include location selection from a map, photo import, cloud syncing, video import, and more, but time was limited and we had to prioritize our ideas. At this stage, Ultrabook features helped us shorten the development time because we could use the global positioning system (GPS) instead of manual location selection and the embedded camera instead of a photo import function. When the initial feature set was fixed, we prepared wireframes and started the design and development.”

    During the design process, the team spent considerable time trying to imagine the easiest ways for users to use the application. They carefully considered the buttons and the application canvas (see Figure 2). Erciyes notes, “To keep the user interface (UI) as simple as possible, we had to design the dashboard first to be understandable and easy to navigate, and then good-looking. We took the pictures of the UI elements ourselves and gathered opinions to determine what each image evoked for people.”


    Figure 2. Day to Day application home page

    When choosing the programming language to use, the team had three choices: C#, C++, and HTML5+JavaScript*. Erciyes describes the decision-making process: “We listed our priorities and deep-dove into the Windows 8 programming documentation to understand how each module and function could be implemented in each language. The team’s familiarity with HTML5+JavaScript and the usability of the Windows 8 application programming interfaces (APIs) were factors in our decision to use HTML5+JavaScript.”

    Development Process for Windows 8


    Day to Day is a Windows Store app. The biggest consideration in deciding whether to program for desktop or Windows Store was the number of users they could reach. “In addition,” says Erciyes, “usability and rapid development, as advantages of HTML5, had an impact on our decision. The availability of the development libraries when using HTML5 met our needs for development. For the software development kit, we preferred the built-in support of Microsoft Visual Studio* Express for including Windows 8-specific libraries and debugging.”

    Sesame Factory specializes in development environments based on HTML5, Cascading Style Sheet version 3 (CSS3), JavaScript, Python*, Java*, C, and Ruby, so the team had no previous Windows development experience prior to the challenge. Erciyes says, “Getting to know the Windows development environment took some time and effort. As the deadline of the challenge got closer, it became the biggest problem. The documentation support from both Intel and Microsoft helped us a lot in resolving this issue.”

    Development Process for the Ultrabook Platform


    Enabling touch, GPS, camera, and accelerometer sensors helped the team provide an enhanced user experience for the Ultrabook platform.

    Touch

    From the time the team discovered the native support for touch that Ultrabook provides, they knew touch would be a key component of Day to Day’s functionality. Erciyes says, “We knew that support for touch would be one of the most important features, unlike with a traditional PC application. We spent a lot of time on the UI and working out how we could provide the best user experience. With Day to Day, users can navigate through screens simply by touching the Ultrabook screen.”

    Erciyes notes that the team’s intention was to design UI elements in such a way that every touch gesture provided flawless operation. He explains: “When working on touch, the main focus is enabling users to use the application with ease and when desirable, but it’s also important to keep the application canvas attractive and usable.”

    From a coding perspective, the team did not implement additional functionality for touch recognition. Erciyes says, “The hardware and the operating system worked perfectly for us, so we had to deal only with the user experience/UI design.”

    Designing the application in a way that allowed both touchscreen and traditional keyboard-mouse interactivity showed the team that offering the touch feature adds great value to the application. Erciyes comments, “We were familiar with the benefits of touchscreen functionality for easing people’s interaction with machines. However, we were not familiar with touchscreen functionality on a traditional laptop device. We could clearly see that this everyday technology integrated on a laptop computer would change the user experience. While working on the touch component, the team learned that adding even a simple sensor feature could add incredible value to the application.”

    Sensors

    Day to Day supports Ultrabook sensors, as follows:

    • Accelerometer. The UI changes its view based on the device orientation; the modular design works well in both portrait and landscape view. So, users can change the orientation of their devices and continue to create and navigate their entries according to the position of the device.
    • GPS. Diary entries are saved with the user’s location and current weather conditions. This feature provides users the ability to perform personal analytics, such as whether their mood is affected by location or weather, or the number of different places that they have been (see Figure 3).
    • Camera. Users can add pictures and videos to diary entries.


    Figure 3. Series of Day to Day diary entries

    The team determined which sensors to use based on those they thought would create an enhanced experience within the application. Erciyes comments, “Adding sensor functionality not only enriched the user experience but also added value to it. These capabilities enabled us to include geolocation, positioning responsiveness, and photo and video capabilities to the application. The built-in GPS sensors in the Ultrabook platform helped us to detect the user location and the weather conditions to include with each diary entry. So, users get the ability to browse their entries based on location. The camera device increases interactivity and creates an engaging user experience. Diary entries that include photos, videos, and geolocation information offer an enhanced user experience over traditional PC applications.”

    Challenges and Opportunities


    For the 2012 Ultrabook Experience Software Challenge, EMEA-based software developers and ISVs (independent software vendors) were invited to submit their ideas for original software applications that leverage the latest Ultrabook functionality, including touch, gesture, and voice recognition. The purpose of the Challenge was to encourage innovation and creativity for a more spontaneous and intuitive user experience with Ultrabook devices. Thirty participants were selected, with nominees from 11 countries: the United Kingdom, Spain, Italy, Germany, the Netherlands, Russia, Romania, Israel, France, Greece, and Malta. Each participant received an Ultrabook software development platform and had six weeks to finish the application. The panel of judges included Intel engineering, marketing, and retail representatives.

    The Sesame Factory team’s biggest challenge was understanding the Windows programming environment. Erciyes notes, “One of the key problems we faced was with asynchronous programming—writing functions in response to a user’s actions. We had no previous experience with this aspect. However, after reading documentation and forums, we could easily implement the code.”

    Erciyes adds, “During the development process, it’s inevitable that you will face problems. When you do, the first thing is to check your code, then debug it; if you still cannot solve the problem, you reread related documentation and consult Intel® Developer Zone forums to find an answer. This was our process, and the community support answers and proper API documentation were helpful for us.” In particular, the team found the following web sites useful:

    The team also made some interesting comments on several Ultrabook features.

    • Touch.“When we used an Ultrabook with the touch feature for the first time, we had a ‘wow’ moment. We found ourselves touching the screen a lot. Especially when scrolling and other gestures were needed, we noticed that it was more responsive than the touchpad.”
    • CPU performance.“With this feature, we thought that users could switch their computers on quickly and create a diary entry about something that just happened.”
    • Long battery life.“To preserve battery life, the most common behavior is to decrease the backlight of the LCD. This action automatically limits the user experience and enthusiasm. With the Ultrabook platform, this scenario is less likely to happen compared with traditional PCs.”
    • Portability.“Compared with traditional laptops, the Ultrabook chassis is more elegant, stylish, and light. We just loved carrying the Ultrabook around.”

    Future Development


    In terms of next steps, the team sees a big opportunity to develop additional applications for Windows 8. Erciyes says, “Compared with other application platforms, design and programming can be implemented relatively easily. Adding new features and keeping Day to Day updated based on user requests are our first priorities. Meanwhile, we are developing new applications to enrich our portfolio with easy-to-use and user-oriented applications.”

    The team has additional features planned for future versions of the application:

    • Calendar browsing. Provide an overview of users’ memories from a specific date
    • Map view. Show entries on a map and give users a bird’s-eye view of places they’ve been
    • Mood view. Display the overall mood for a selected period of time, giving users a unique perspective to analyze their mood
    • Cloud syncing. Store the entries securely in the cloud, so users can access them from different locations and devices
    • Export. Enable users to export their entries and create good-looking PDFs that they can print

    Summary


    For the EMEA-based Ultrabook Experience Software Challenge, the developer Sesame Factory won second place with its application, Day to Day, a diary application in which users can record the events of each day. The application enables users to include supplemental information, such as photos, location, weather information, and a mood indicator. Because the Sesame Factory development team’s previous work used C and Python, Day to Day is their first Windows-based effort. For this project, they used HTML5+JavaScript as their programming language. The team made good use of Ultrabook sensors, with touch, accelerometer, GPS, and camera functionality, to enrich the user experience. The team’s biggest challenge was understanding the Windows operating environment, and they found good documentation from Intel and Microsoft to help. They hope to continue improving Day to Day as well as develop new Windows-based applications.

    Company


    Sesame Factory is a start-up company founded by Ercan Erciyes (@ererciyes), Semih Hazar (@shazar), and Engin Subaslar (@esubaslar). In 2011, the company founded Nasil TV (www.nasil.tv), a web-based video application that presents originally crafted how-to videos. The company was acquired in early 2013 by Mynet, the most prominent Internet portal in Turkey.

    About the Author


    Karen Marcus, M.A., is an award-winning technology marketing writer who has 16 years of experience. She has developed case studies, brochures, white papers, data sheets, solution briefs, articles, website copy, video scripts, and other documents for such companies as Intel, IBM, Samsung, HP, Amazon* Web Services, Microsoft, and EMC. Karen is familiar with a variety of current technologies, including cloud computing, IT outsourcing, enterprise computing, operating systems, application development, digital signage, and personal computing.

     

     


     

    Intel, the Intel logo, and Ultrabook are trademarks of Intel Corporation in the U.S. and/or other countries.
    Copyright © 2013 Intel Corporation. All rights reserved.
    *Other names and brands may be claimed as the property of others.

  • ultrabook
  • Windows* 8
  • store
  • Tablet
  • Apps
  • sensors
  • Developers
  • Microsoft Windows* 8
  • Microsoft Windows* 8 Style UI
  • Sensors
  • Laptop
  • Tablet
  • Desktop
  • URL
  • Ultrabook and Tablet Windows* 8 Sensors Development Guide


    Overview

    This guide gives developers an overview of the Microsoft Windows* 8 sensor application programming interfaces (APIs) for Desktop and Windows* 8 Store apps, with a focus on the sensor capabilities available in Windows 8 Desktop mode. We summarize the APIs that let you create interactive applications using the common sensors included with Windows 8, such as accelerometers, magnetometers, and gyroscopes.

    Contents

    Programming Choices for Windows 8

    Developers have several API choices when programming sensors on Windows 8. The left side of Figure 1 shows the new touch-friendly app environment, called “Windows* 8 Store apps.” The only API library that Windows* 8 Store apps can use is the new API library called WinRT. The WinRT sensor API is part of the overall WinRT library. For more details, see: http://msdn.microsoft.com/en-us/library/windows/apps/windows.devices.sensors.aspx

    Shown on the right side are the traditional Win Forms or MFC-style apps, called “Desktop apps” because they run in the Desktop Window Manager environment. Desktop apps can use either the native Win32/COM API or the .NET-style API.

    In both cases, these APIs go through a Windows middleware component called the Windows Sensor Framework. The Windows Sensor Framework defines the sensor object model. The different APIs “bind” to that object model in slightly different ways.


    Figure 1: Sensor architecture for Store apps and Desktop apps in Windows 8

    The differences between Desktop and Windows* 8 Store app development are discussed later in this article. For simplicity, we will consider only Desktop app development here. For information about Windows* 8 Store app development, see: http://msdn.microsoft.com/library/windows/apps/br211369

    Sensors

    There are many kinds of sensors, but the ones of interest here are those required for Windows 8, namely accelerometers, gyroscopes, ambient light sensors, compasses, and GPS. Windows 8 represents the physical sensors with object-oriented abstractions. To manipulate a sensor, the programmer uses APIs to interact with the corresponding object.

    You may have noticed that the figure below (Figure 2) shows more objects than there are pieces of actual hardware. Windows defines certain “logical sensor” objects by combining information from multiple physical sensors. This is called “Sensor Fusion.”


    Figure 2: The various sensors supported on Windows 8

    Sensor Fusion

    The physical sensor chips have some inherent natural limitations. For example:

    • An accelerometer measures linear acceleration, which is a measurement of the combined relative motion and the Earth’s gravity. If you want to know the computer’s tilt, you have to do some additional math (see the sketch after this list).
    • A magnetometer measures the strength of magnetic fields, indicating the location of the Earth’s magnetic north pole.
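
    As a quick illustration of the math behind the first bullet (a sketch, not part of the Windows APIs): with the device at rest, so the accelerometer reads only gravity in Gs, the tilt angles can be estimated as follows.

    [xhtml]#include <math.h>
    // Sketch: estimate tilt from a 3-axis accelerometer reading, in degrees.
    // Valid only while the device is still, so gravity dominates the reading.
    void TiltFromAccel(double xG, double yG, double zG,
                       double *pPitchDeg, double *pRollDeg)
    {
        const double RAD2DEG = 180.0 / 3.14159265358979;
        *pPitchDeg = atan2(-xG, sqrt(yG * yG + zG * zG)) * RAD2DEG;
        *pRollDeg  = atan2(yG, zG) * RAD2DEG;
    }
    [/xhtml]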

    Both of these measurements are subject to an inherent offset problem, which can be corrected using raw data from the gyroscope. Both measurements are also (by extension) dependent on how much the computer is tilted from the Earth’s horizontal plane.

    If you really want the computer’s heading relative to the Earth’s true north pole (the magnetic north pole is in a different location and moves over time), it must be corrected for that as well.

    Sensor Fusion (Figure 3) means taking the raw data from multiple physical sensors (especially the accelerometer, gyroscope, and magnetometer), performing the math that corrects for natural sensor limitations, computing data that is more suitable for human consumption, and exposing it in the form of logical-sensor abstractions. The necessary transformations from physical sensor data to abstract sensor data must be implemented somewhere. If your system design has a SensorHub, the fusion operations take place inside the microcontroller firmware. If your system design does not have a SensorHub, the fusion operations must be done in one or more device drivers provided by the IHV and/or OEM.


    Figure 3: Sensor fusion by combining the output of multiple sensors

    Identifying Sensors

    To manipulate a sensor, you need a system to identify it and refer to it. The Windows Sensor Framework defines a number of categories that sensors are grouped into. It also defines a number of specific sensor types. Table 1 lists some of the sensors available to your Desktop app.

    Table 1: Sensor types and categories

    Every sensor belongs to the parent category “All” and to one of the following categories:

    • Biometric: Human Presence, Human Proximity*, Touch
    • Electrical: Capacitance, Current, Electrical Power, Inductance, Potentiometer, Resistance, Voltage
    • Environmental: Atmospheric Pressure, Humidity, Temperature, Wind Direction, Wind Speed
    • Light: Ambient Light*
    • Location: Broadcast, Gps*, Static
    • Mechanical: Boolean Switch, Boolean Switch Array, Force, Multivalue Switch, Pressure, Strain, Weight
    • Motion: Accelerometer 1D, Accelerometer 2D, Accelerometer 3D*, Gyrometer 1D, Gyrometer 2D, Gyrometer 3D*, Motion Detector, Speedometer
    • Orientation: Compass 1D, Compass 2D, Compass 3D*, Device Orientation*, Distance 1D, Distance 2D, Distance 3D, Inclinometer 1D, Inclinometer 2D, Inclinometer 3D*
    • Scanner: Barcode, Rfid

    The sensor types required by Windows are marked with an asterisk (*):

    • Accelerometer, gyroscope, compass, and ambient light are the required “real/physical” sensors
    • Device Orientation and Inclinometer are the required “virtual/fusion” sensors (note: the compass also includes fusion-enhanced/tilt-compensated data)
    • GPS is required if a WWAN radio is present; otherwise GPS is optional
    • Human Proximity is a common entry on the required list, but it is not required at this time.

    The category and type names shown in Table 1 are in human-readable form. When programming, however, you will need the programming constants for each type of sensor. All of these constants are really just numbers called GUIDs (Globally Unique IDs). Table 2 below shows a sample of sensor categories and types, the constant names for Win32/COM and .NET, and their underlying GUID values.

    Table 2: Constants and globally unique IDs (GUIDs) for some common sensors

    Identifier | Constant (Win32/COM) | Constant (.NET) | GUID
    Category “All” | SENSOR_CATEGORY_ALL | SensorCategories.SensorCategoryAll | {C317C286-C468-4288-9975-D4C4587C442C}
    Category Biometric | SENSOR_CATEGORY_BIOMETRIC | SensorCategories.SensorCategoryBiometric | {CA19690F-A2C7-477D-A99E-99EC6E2B5648}
    Category Electrical | SENSOR_CATEGORY_ELECTRICAL | SensorCategories.SensorCategoryElectrical | {FB73FCD8-FC4A-483C-AC58-27B691C6BEFF}
    Category Environmental | SENSOR_CATEGORY_ENVIRONMENTAL | SensorCategories.SensorCategoryEnvironmental | {323439AA-7F66-492B-BA0C-73E9AA0A65D5}
    Category Light | SENSOR_CATEGORY_LIGHT | SensorCategories.SensorCategoryLight | {17A665C0-9063-4216-B202-5C7A255E18CE}
    Category Location | SENSOR_CATEGORY_LOCATION | SensorCategories.SensorCategoryLocation | {BFA794E4-F964-4FDB-90F6-51056BFE4B44}
    Category Mechanical | SENSOR_CATEGORY_MECHANICAL | SensorCategories.SensorCategoryMechanical | {8D131D68-8EF7-4656-80B5-CCCBD93791C5}
    Category Motion | SENSOR_CATEGORY_MOTION | SensorCategories.SensorCategoryMotion | {CD09DAF1-3B2E-4C3D-B598-B5E5FF93FD46}
    Category Orientation | SENSOR_CATEGORY_ORIENTATION | SensorCategories.SensorCategoryOrientation | {9E6C04B6-96FE-4954-B726-68682A473F69}
    Category Scanner | SENSOR_CATEGORY_SCANNER | SensorCategories.SensorCategoryScanner | {B000E77E-F5B5-420F-815D-0270A726F270}
    Type HumanProximity | SENSOR_TYPE_HUMAN_PROXIMITY | SensorTypes.SensorTypeHumanProximity | {5220DAE9-3179-4430-9F90-06266D2A34DE}
    Type AmbientLight | SENSOR_TYPE_AMBIENT_LIGHT | SensorTypes.SensorTypeAmbientLight | {97F115C8-599A-4153-8894-D2D12899918A}
    Type Gps | SENSOR_TYPE_LOCATION_GPS | SensorTypes.SensorTypeLocationGps | {ED4CA589-327A-4FF9-A560-91DA4B48275E}
    Type Accelerometer3D | SENSOR_TYPE_ACCELEROMETER_3D | SensorTypes.SensorTypeAccelerometer3D | {C2FB0F5F-E2D2-4C78-BCD0-352A9582819D}
    Type Gyrometer3D | SENSOR_TYPE_GYROMETER_3D | SensorTypes.SensorTypeGyrometer3D | {09485F5A-759E-42C2-BD4B-A349B75C8643}
    Type Compass3D | SENSOR_TYPE_COMPASS_3D | SensorTypes.SensorTypeCompass3D | {76B5CE0D-17DD-414D-93A1-E127F40BDF6E}
    Type DeviceOrientation | SENSOR_TYPE_DEVICE_ORIENTATION | SensorTypes.SensorTypeDeviceOrientation | {CDB5D8F7-3CFD-41C8-8542-CCE622CF5D6E}
    Type Inclinometer3D | SENSOR_TYPE_INCLINOMETER_3D | SensorTypes.SensorTypeInclinometer3D | {B84919FB-EA85-4976-8444-6F6F5C6D31DB}

    These are just the most commonly used GUIDs—there are many more. At first you might think GUIDs are boring and tedious, but there is one big reason for using them: extensibility. Because the APIs don’t care about the actual sensor names (they only pass GUIDs around), vendors can create new GUIDs for “value-add” sensors.

    Generating New GUIDs

    Microsoft provides a tool in Visual Studio* that anyone can use to generate new GUIDs. Figure 4 shows a screenshot of Visual Studio doing this. All the vendors have to do is publish them, and new functionality can be exposed without any changes to the Microsoft APIs or to any operating system code.


    Figure 4: Defining a new GUID for a value-add sensor
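
    Once generated, a new GUID is just another constant in your code. A hypothetical example (the GUID below is made up for illustration; generate your own with the tool):

    [xhtml]// Sketch: a vendor-defined type GUID for a "value-add" sensor.
    DEFINE_GUID(SENSOR_TYPE_MY_VALUE_ADD,
        0x12345678, 0x1234, 0x1234, 0x12, 0x34, 0x12, 0x34, 0x56, 0x78, 0x9a, 0xbc);
    // It is then used exactly like the predefined types, e.g.:
    // pSensorManager->GetSensorsByType(SENSOR_TYPE_MY_VALUE_ADD, &pSensorCollection);
    [/xhtml]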

    Using the Sensor Manager Object

    Asking by Type

    Your app can ask for a specific type of sensor, such as Gyrometer3D. The Sensor Manager consults the list of sensor hardware present on the computer and returns a collection of matching objects bound to that hardware. Although a sensor collection may contain 0, 1, or more objects, it usually has just one. The C++ code sample below uses the Sensor Manager object’s GetSensorsByType method to search for 3-axis gyroscopes and returns the results in a sensor collection. Note that you must ::CoCreateInstance() the Sensor Manager object first.

    [xhtml]// Additional includes for sensors
    #include <InitGuid.h>
    #include <SensorsApi.h>
    #include <Sensors.h>
    // Create a COM interface to the SensorManager object.
    ISensorManager* pSensorManager = NULL;
    HRESULT hr = ::CoCreateInstance(CLSID_SensorManager, NULL, CLSCTX_INPROC_SERVER, 
        IID_PPV_ARGS(&pSensorManager));
    if (FAILED(hr))
    {
        ::MessageBox(NULL, _T("Unable to CoCreateInstance() the SensorManager."), 
            _T("Sensor C++ Sample"), MB_OK | MB_ICONERROR);
        return -1;
    }
    // Get a collection of all 3-axis gyros on the computer.
    ISensorCollection* pSensorCollection = NULL;
    hr = pSensorManager->GetSensorsByType(SENSOR_TYPE_GYROMETER_3D, &pSensorCollection);
    if (FAILED(hr))
    {
        ::MessageBox(NULL, _T("Unable to find any Gyros on the computer."), 
            _T("Sensor C++ Sample"), MB_OK | MB_ICONERROR);
        return -1;
    }
    [/xhtml]

    Asking by Category

    Your app can ask for sensors by category, such as motion sensors. The Sensor Manager consults the list of sensor hardware present on the computer and returns a collection of motion objects bound to that hardware. The SensorCollection may contain 0, 1, or more objects; on most computers, the collection will have two motion objects: Accelerometer3D and Gyrometer3D.

    The C++ code sample below uses the Sensor Manager object’s GetSensorsByCategory method to search for motion sensors and returns the results in a sensor collection.

    [xhtml]// Additional includes for sensors
    #include <InitGuid.h>
    #include <SensorsApi.h>
    #include <Sensors.h>
    // Create a COM interface to the SensorManager object.
    ISensorManager* pSensorManager = NULL;
    HRESULT hr = ::CoCreateInstance(CLSID_SensorManager, NULL, CLSCTX_INPROC_SERVER, 
        IID_PPV_ARGS(&pSensorManager));
    if (FAILED(hr))
    {
        ::MessageBox(NULL, _T("Unable to CoCreateInstance() the SensorManager."), 
            _T("Sensor C++ Sample"), MB_OK | MB_ICONERROR);
        return -1;
    }
    // Get a collection of all motion sensors on the computer.
    ISensorCollection* pSensorCollection = NULL;
    hr = pSensorManager->GetSensorsByCategory(SENSOR_CATEGORY_MOTION, &pSensorCollection);
    if (FAILED(hr))
    {
        ::MessageBox(NULL, _T("Unable to find any sensors on the computer."), 
            _T("Sensor C++ Sample"), MB_OK | MB_ICONERROR);
        return -1;
    }
    [/xhtml]

    Asking by Category “All”

    In practice, the most useful way for your app to ask is for all of the sensors on the computer at once. The Sensor Manager consults the list of sensor hardware present on the computer and returns a collection of all the objects bound to that hardware. The sensor collection may contain 0, 1, or more objects; on most computers, the collection will have 7 or more.

    C++ does not have a GetAllSensors call, so you must use GetSensorsByCategory(SENSOR_CATEGORY_ALL, …) instead, as shown in the sample code below.

    [xhtml]// Additional includes for sensors
    #include <InitGuid.h>
    #include <SensorsApi.h>
    #include <Sensors.h>
    // Create a COM interface to the SensorManager object.
    ISensorManager* pSensorManager = NULL;
    HRESULT hr = ::CoCreateInstance(CLSID_SensorManager, NULL, CLSCTX_INPROC_SERVER, 
        IID_PPV_ARGS(&pSensorManager));
    if (FAILED(hr))
    {
        ::MessageBox(NULL, _T("Unable to CoCreateInstance() the SensorManager."), 
            _T("Sensor C++ Sample"), MB_OK | MB_ICONERROR);
        return -1;
    }
    // Get a collection of all sensors on the computer.
    ISensorCollection* pSensorCollection = NULL;
    hr = pSensorManager->GetSensorsByCategory(SENSOR_CATEGORY_ALL, &pSensorCollection);
    if (FAILED(hr))
    {
        ::MessageBox(NULL, _T("Unable to find any Motion sensors on the computer."), 
            _T("Sensor C++ Sample"), MB_OK | MB_ICONERROR);
        return -1;
    }
    [/xhtml]

    Sensor Life Cycle – Enter and Leave Events

    On Windows, as with most hardware devices, sensors are treated as plug-and-play devices. At first you might ask, “The sensors are hard-wired onto the computer’s motherboard; why worry about plug and play if they’ll never be plugged in or unplugged?” It can happen in the following scenarios:

    1. There may be USB-based sensors external to the system, plugged into a USB port.

    2. Sensors may be attached over an unreliable wireless interface (such as Bluetooth) or a wired interface (such as Ethernet), where connects and disconnects happen.

    3. If Windows Update upgrades the sensors’ device driver, they appear to disconnect and then reconnect.

    4. When Windows shuts down (to S4 or S5), the sensors appear to disconnect.

    In sensor operations, plug and play is called an “Enter” event, and disconnect is called a “Leave” event. Resilient apps need to handle both events.

    “Enter” Event Callback

    Your app may already be running when a sensor is plugged in. When that happens, the Sensor Manager reports the sensor “Enter” event. Note: if the sensors are already plugged in when your app starts running, you will not get “Enter” events for those sensors. In C++/COM, you must use the SetEventSink method to hook the callback. The callback cannot just be a function; it must be an entire class that inherits from ISensorManagerEvents and also implements IUnknown. The ISensorManagerEvents interface must implement the callback function:

    STDMETHODIMP OnSensorEnter(ISensor *pSensor, SensorState state);

    [xhtml]// Hook the SensorManager for any SensorEnter events.
    pSensorManagerEventClass = new SensorManagerEventSink();  // create C++ class instance
    // get the ISensorManagerEvents COM interface pointer
    HRESULT hr = pSensorManagerEventClass->QueryInterface(IID_PPV_ARGS(&pSensorManagerEvents)); 
    if (FAILED(hr))
    {
        ::MessageBox(NULL, _T("Cannot query ISensorManagerEvents interface for our callback class."), 
            _T("Sensor C++ Sample"), MB_OK | MB_ICONERROR);
        return -1;
    }
    // hook COM interface of our class to SensorManager eventer
    hr = pSensorManager->SetEventSink(pSensorManagerEvents); 
    if (FAILED(hr))
    {
        ::MessageBox(NULL, _T("Cannot SetEventSink on SensorManager to our callback class."), 
            _T("Sensor C++ Sample"), MB_OK | MB_ICONERROR);
        return -1;
    }
    [/xhtml]

    Code: Hooking the callback for the “Enter” event

    Below is the C++/COM equivalent of the “Enter” callback. In this function you would perform all the initialization steps you would normally do in your main loop. In fact, it is more efficient to refactor your code so that the main loop merely calls OnSensorEnter to simulate an “Enter” event.

    [xhtml]STDMETHODIMP SensorManagerEventSink::OnSensorEnter(ISensor *pSensor, SensorState state)
    {
        // Examine the SupportsDataField for SENSOR_DATA_TYPE_LIGHT_LEVEL_LUX.
        VARIANT_BOOL bSupported = VARIANT_FALSE;
        HRESULT hr = pSensor->SupportsDataField(SENSOR_DATA_TYPE_LIGHT_LEVEL_LUX, &bSupported);
        if (FAILED(hr))
        {
            ::MessageBox(NULL, _T("Cannot check SupportsDataField for SENSOR_DATA_TYPE_LIGHT_LEVEL_LUX."), 
                _T("Sensor C++ Sample"), MB_OK | MB_ICONINFORMATION);
            return hr;
        }
        if (bSupported == VARIANT_FALSE)
        {
            // This is not the sensor we want.
            return -1;
        }
        ISensor *pAls = pSensor;  // It looks like an ALS, memorize it. 
        ::MessageBox(NULL, _T("Ambient Light Sensor has entered."), 
            _T("Sensor C++ Sample"), MB_OK | MB_ICONINFORMATION);
        .
        .
        .
        return hr;
    }
    [/xhtml]

    Code: Callback for the “Enter” event

    Leave Event

    The individual sensor (not the Sensor Manager) reports when the “Leave” event happens. The code is the same as the earlier hooking of the callback for the “Enter” event.

    [xhtml]// Hook the Sensor for any DataUpdated, Leave, or StateChanged events.
    SensorEventSink* pSensorEventClass = new SensorEventSink();  // create C++ class instance
    ISensorEvents* pSensorEvents = NULL;
    // get the ISensorEvents COM interface pointer
    HRESULT hr = pSensorEventClass->QueryInterface(IID_PPV_ARGS(&pSensorEvents)); 
    if (FAILED(hr))
    {
        ::MessageBox(NULL, _T("Cannot query ISensorEvents interface for our callback class."), 
            _T("Sensor C++ Sample"), MB_OK | MB_ICONERROR);
        return -1;
    }
    hr = pSensor->SetEventSink(pSensorEvents); // hook COM interface of our class to Sensor eventer
    if (FAILED(hr))
    {
        ::MessageBox(NULL, _T("Cannot SetEventSink on the Sensor to our callback class."), 
            _T("Sensor C++ Sample"), MB_OK | MB_ICONERROR);
        return -1;
    }
    [/xhtml]

    Code: Hooking the callback for the “Leave” event

    The OnLeave event handler receives the ID of the departing sensor as an argument.

    [xhtml]STDMETHODIMP SensorEventSink::OnLeave(REFSENSOR_ID sensorID)
    {
        HRESULT hr = S_OK;
        ::MessageBox(NULL, _T("Ambient Light Sensor has left."), 
            _T("Sensor C++ Sample"), MB_OK | MB_ICONINFORMATION);
        // Perform any housekeeping tasks for the sensor that is leaving.
        // For example, if you have maintained a reference to the sensor,
        // release it now and set the pointer to NULL.
        return hr;
    }
    [/xhtml]

    Code: Callback for the “Leave” event

    Picking Sensors for Your App

    We care about sensors because of what they tell us. Different types of sensors tell us different things. Microsoft calls these pieces of information “Data Fields,” and they are grouped together in a SensorDataReport. Your computer may (potentially) have more than one type of sensor that can provide the information your app cares about. Your app probably doesn’t care which sensor the information comes from, so long as it can get the information.

    Table 3 shows the constant names of the most commonly used Data Fields for Win32/COM and .NET. Like the sensor identifiers, these constants are just human-readable names that stand for big numbers. Beyond the “well known” Data Fields that Microsoft has predefined, Data Fields are extensible, and there are many other “well known” IDs waiting for you to explore.

    Table 3: Data Field identifier constants

    Constant (Win32/COM) | Constant (.NET) | PROPERTYKEY (GUID, PID)
    SENSOR_DATA_TYPE_TIMESTAMP | SensorDataTypeTimestamp | {DB5E0CF2-CF1F-4C18-B46C-D86011D62150}, 2
    SENSOR_DATA_TYPE_LIGHT_LEVEL_LUX | SensorDataTypeLightLevelLux | {E4C77CE2-DCB7-46E9-8439-4FEC548833A6}, 2
    SENSOR_DATA_TYPE_ACCELERATION_X_G | SensorDataTypeAccelerationXG | {3F8A69A2-07C5-4E48-A965-CD797AAB56D5}, 2
    SENSOR_DATA_TYPE_ACCELERATION_Y_G | SensorDataTypeAccelerationYG | {3F8A69A2-07C5-4E48-A965-CD797AAB56D5}, 3
    SENSOR_DATA_TYPE_ACCELERATION_Z_G | SensorDataTypeAccelerationZG | {3F8A69A2-07C5-4E48-A965-CD797AAB56D5}, 4
    SENSOR_DATA_TYPE_ANGULAR_VELOCITY_X_DEGREES_PER_SECOND | SensorDataTypeAngularVelocityXDegreesPerSecond | {3F8A69A2-07C5-4E48-A965-CD797AAB56D5}, 10
    SENSOR_DATA_TYPE_ANGULAR_VELOCITY_Y_DEGREES_PER_SECOND | SensorDataTypeAngularVelocityYDegreesPerSecond | {3F8A69A2-07C5-4E48-A965-CD797AAB56D5}, 11
    SENSOR_DATA_TYPE_ANGULAR_VELOCITY_Z_DEGREES_PER_SECOND | SensorDataTypeAngularVelocityZDegreesPerSecond | {3F8A69A2-07C5-4E48-A965-CD797AAB56D5}, 12
    SENSOR_DATA_TYPE_TILT_X_DEGREES | SensorDataTypeTiltXDegrees | {1637D8A2-4248-4275-865D-558DE84AEDFD}, 2
    SENSOR_DATA_TYPE_TILT_Y_DEGREES | SensorDataTypeTiltYDegrees | {1637D8A2-4248-4275-865D-558DE84AEDFD}, 3
    SENSOR_DATA_TYPE_TILT_Z_DEGREES | SensorDataTypeTiltZDegrees | {1637D8A2-4248-4275-865D-558DE84AEDFD}, 4
    SENSOR_DATA_TYPE_MAGNETIC_HEADING_COMPENSATED_MAGNETIC_NORTH_DEGREES | SensorDataTypeMagneticHeadingCompensatedTrueNorthDegrees | {1637D8A2-4248-4275-865D-558DE84AEDFD}, 11
    SENSOR_DATA_TYPE_MAGNETIC_FIELD_STRENGTH_X_MILLIGAUSS | SensorDataTypeMagneticFieldStrengthXMilligauss | {1637D8A2-4248-4275-865D-558DE84AEDFD}, 19
    SENSOR_DATA_TYPE_MAGNETIC_FIELD_STRENGTH_Y_MILLIGAUSS | SensorDataTypeMagneticFieldStrengthYMilligauss | {1637D8A2-4248-4275-865D-558DE84AEDFD}, 20
    SENSOR_DATA_TYPE_MAGNETIC_FIELD_STRENGTH_Z_MILLIGAUSS | SensorDataTypeMagneticFieldStrengthZMilligauss | {1637D8A2-4248-4275-865D-558DE84AEDFD}, 21
    SENSOR_DATA_TYPE_QUATERNION | SensorDataTypeQuaternion | {1637D8A2-4248-4275-865D-558DE84AEDFD}, 17
    SENSOR_DATA_TYPE_ROTATION_MATRIX | SensorDataTypeRotationMatrix | {1637D8A2-4248-4275-865D-558DE84AEDFD}, 16
    SENSOR_DATA_TYPE_LATITUDE_DEGREES | SensorDataTypeLatitudeDegrees | {055C74D8-CA6F-47D6-95C6-1ED3637A0FF4}, 2
    SENSOR_DATA_TYPE_LONGITUDE_DEGREES | SensorDataTypeLongitudeDegrees | {055C74D8-CA6F-47D6-95C6-1ED3637A0FF4}, 3
    SENSOR_DATA_TYPE_ALTITUDE_ELLIPSOID_METERS | SensorDataTypeAltitudeEllipsoidMeters | {055C74D8-CA6F-47D6-95C6-1ED3637A0FF4}, 5

    What makes Data Field identifiers different from sensor IDs is the use of a data type called PROPERTYKEY. A PROPERTYKEY consists of a GUID (similar to the GUIDs used for sensors) plus an extra number called a “PID” (property ID). You may notice that the GUID part of a PROPERTYKEY is common to sensors in the same category. All Data Field values have native data types, such as Boolean, unsigned char, int, float, and double.

    In Win32/COM, the value of a Data Field is stored in a polymorphic data type called PROPVARIANT. In .NET, a CLR (Common Language Runtime) data type called “object” does the same thing. You must query and/or typecast the polymorphic data type to the “expected”/“documented” data type.
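
    As a small sketch of that query-and-check pattern in Win32/COM (assuming pReport is an ISensorDataReport* already in hand):

    [xhtml]PROPVARIANT pv;
    PropVariantInit(&pv);
    // Ask for a data field whose documented type is a float (VT_R4).
    HRESULT hr = pReport->GetSensorValue(SENSOR_DATA_TYPE_LIGHT_LEVEL_LUX, &pv);
    if (SUCCEEDED(hr) && (pv.vt == VT_R4))  // verify the polymorphic type first
    {
        float fLux = pv.fltVal;             // only now is the value safe to use
        // ... use fLux ...
    }
    PropVariantClear(&pv);
    [/xhtml]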

    Use the sensor’s SupportsDataField() method to check sensors for the Data Fields of interest. This is the programming idiom most often used to pick sensors. Depending on your app’s usage model, you may only need a subset of the Data Fields, not all of them. Pick the sensors you want based on whether they support the Data Fields you need. Note that you also need type casting to assign the subclass member variables from the base sensor class.

    [xhtml]ISensor* m_pAls;
    ISensor* m_pAccel;
    ISensor* m_pTilt;
    // Cycle through the collection looking for sensors we care about.
    ULONG ulCount = 0;
    HRESULT hr = pSensorCollection->GetCount(&ulCount);
    if (FAILED(hr))
    {
        ::MessageBox(NULL, _T("Unable to get count of sensors on the computer."), _T("Sensor C++ Sample"), MB_OK | MB_ICONERROR);
        return -1;
    }
    for (int i = 0; i < (int)ulCount; i++)
    {
        hr = pSensorCollection->GetAt(i, &pSensor);
        if (SUCCEEDED(hr))
        {
            VARIANT_BOOL bSupported = VARIANT_FALSE;
            hr = pSensor->SupportsDataField(SENSOR_DATA_TYPE_LIGHT_LEVEL_LUX, &bSupported);
            if (SUCCEEDED(hr) && (bSupported == VARIANT_TRUE)) m_pAls = pSensor;
            hr = pSensor->SupportsDataField(SENSOR_DATA_TYPE_ACCELERATION_Z_G, &bSupported);
            if (SUCCEEDED(hr) && (bSupported == VARIANT_TRUE)) m_pAccel = pSensor;
            hr = pSensor->SupportsDataField(SENSOR_DATA_TYPE_TILT_Z_DEGREES, &bSupported);
            if (SUCCEEDED(hr) && (bSupported == VARIANT_TRUE)) m_pTilt = pSensor;
            .
            .
            .
        }
    }
    [/xhtml]

    Code: Using the sensor’s SupportsDataField() method to see the supported Data Fields

    Sensor Properties

    In addition to Data Fields, sensors have Properties that can be used for identification and configuration. Table 4 shows the most commonly used Properties. Just like Data Fields, Properties have constant names used by Win32/COM and .NET, and those constants are really PROPERTYKEY numbers underneath. Properties are vendor-extensible and also hold PROPVARIANT polymorphic data types. Unlike Data Fields, which are read-only, Properties are readable and writable. It is up to the individual sensor whether it rejects write attempts. As an app developer, you need to do a write-read-verify, because no exception is thrown when a write attempt fails.

    Table 4: Commonly used sensor Properties and PIDs

    Identification (Win32/COM) | Identification (.NET) | PROPERTYKEY (GUID, PID)
    SENSOR_PROPERTY_PERSISTENT_UNIQUE_ID | SensorID | {7F8383EC-D3EC-495C-A8CF-B8BBE85C2920}, 5
    WPD_FUNCTIONAL_OBJECT_CATEGORY | CategoryID | {8F052D93-ABCA-4FC5-A5AC-B01DF4DBE598}, 2
    SENSOR_PROPERTY_TYPE | TypeID | {7F8383EC-D3EC-495C-A8CF-B8BBE85C2920}, 2
    SENSOR_PROPERTY_STATE | State | {7F8383EC-D3EC-495C-A8CF-B8BBE85C2920}, 3
    SENSOR_PROPERTY_MANUFACTURER | SensorManufacturer | {7F8383EC-D3EC-495C-A8CF-B8BBE85C2920}, 6
    SENSOR_PROPERTY_MODEL | SensorModel | {7F8383EC-D3EC-495C-A8CF-B8BBE85C2920}, 7
    SENSOR_PROPERTY_SERIAL_NUMBER | SensorSerialNumber | {7F8383EC-D3EC-495C-A8CF-B8BBE85C2920}, 8
    SENSOR_PROPERTY_FRIENDLY_NAME | FriendlyName | {7F8383EC-D3EC-495C-A8CF-B8BBE85C2920}, 9
    SENSOR_PROPERTY_DESCRIPTION | SensorDescription | {7F8383EC-D3EC-495C-A8CF-B8BBE85C2920}, 10
    SENSOR_PROPERTY_MIN_REPORT_INTERVAL | MinReportInterval | {7F8383EC-D3EC-495C-A8CF-B8BBE85C2920}, 12
    SENSOR_PROPERTY_CONNECTION_TYPE | SensorConnectionType | {7F8383EC-D3EC-495C-A8CF-B8BBE85C2920}, 11
    SENSOR_PROPERTY_DEVICE_ID | SensorDevicePath | {7F8383EC-D3EC-495C-A8CF-B8BBE85C2920}, 15
    SENSOR_PROPERTY_RANGE_MAXIMUM | SensorRangeMaximum | {7F8383EC-D3EC-495C-A8CF-B8BBE85C2920}, 21
    SENSOR_PROPERTY_RANGE_MINIMUM | SensorRangeMinimum | {7F8383EC-D3EC-495C-A8CF-B8BBE85C2920}, 20
    SENSOR_PROPERTY_ACCURACY | SensorAccuracy | {7F8383EC-D3EC-495C-A8CF-B8BBE85C2920}, 17
    SENSOR_PROPERTY_RESOLUTION | SensorResolution | {7F8383EC-D3EC-495C-A8CF-B8BBE85C2920}, 18

    Configuration (Win32/COM) | Configuration (.NET) | PROPERTYKEY (GUID, PID)
    SENSOR_PROPERTY_CURRENT_REPORT_INTERVAL | ReportInterval | {7F8383EC-D3EC-495C-A8CF-B8BBE85C2920}, 13
    SENSOR_PROPERTY_CHANGE_SENSITIVITY | ChangeSensitivity | {7F8383EC-D3EC-495C-A8CF-B8BBE85C2920}, 14
    SENSOR_PROPERTY_REPORTING_STATE | ReportingState | {7F8383EC-D3EC-495C-A8CF-B8BBE85C2920}, 27
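
    For example, reading a single identification property follows the same PROPVARIANT pattern described above (a sketch, assuming pSensor is a valid ISensor*):

    [xhtml]PROPVARIANT pv;
    PropVariantInit(&pv);
    // Read the sensor's friendly name; its documented type is a string (VT_LPWSTR).
    HRESULT hr = pSensor->GetProperty(SENSOR_PROPERTY_FRIENDLY_NAME, &pv);
    if (SUCCEEDED(hr) && (pv.vt == VT_LPWSTR))
    {
        ::MessageBoxW(NULL, pv.pwszVal, L"Sensor friendly name", MB_OK);
    }
    PropVariantClear(&pv);
    [/xhtml]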

    Setting Sensor Sensitivity

    The sensitivity setting is probably the most useful sensor Property. It can be used to assign a threshold that controls or filters the number of SensorDataReports sent to the host computer. Traffic can be reduced this way: only send up those DataUpdated events that are really worth bothering the host CPU about. The way Microsoft has defined the data type of this sensitivity Property is a little unusual. It is a container type, called IPortableDeviceValues in Win32/COM and SensorPortableDeviceValues in .NET. The container holds a collection of tuples, each of which is a Data Field PROPERTYKEY followed by the sensitivity value for that Data Field. The sensitivity always uses the same units of measure and data type as the matching Data Field.

    [xhtml]// Configure sensitivity
    // create an IPortableDeviceValues container for holding the <Data Field, Sensitivity> tuples.
    IPortableDeviceValues* pInSensitivityValues;
    hr = ::CoCreateInstance(CLSID_PortableDeviceValues, NULL, CLSCTX_INPROC_SERVER, IID_PPV_ARGS(&pInSensitivityValues));
    if (FAILED(hr))
    {
        ::MessageBox(NULL, _T("Unable to CoCreateInstance() a PortableDeviceValues collection."), _T("Sensor C++ Sample"), MB_OK | MB_ICONERROR);
        return -1;
    }
    // fill in IPortableDeviceValues container contents here: 0.1 G sensitivity in each of X, Y, and Z axes.
    PROPVARIANT pv;
    PropVariantInit(&pv);
    pv.vt = VT_R8; // COM type for (double)
    pv.dblVal = (double)0.1;
    pInSensitivityValues->SetValue(SENSOR_DATA_TYPE_ACCELERATION_X_G, &pv);
    pInSensitivityValues->SetValue(SENSOR_DATA_TYPE_ACCELERATION_Y_G, &pv);
    pInSensitivityValues->SetValue(SENSOR_DATA_TYPE_ACCELERATION_Z_G, &pv);
    // create an IPortableDeviceValues container for holding the <SENSOR_PROPERTY_CHANGE_SENSITIVITY, pInSensitivityValues> tuple.
    IPortableDeviceValues* pInValues;
    hr = ::CoCreateInstance(CLSID_PortableDeviceValues, NULL, CLSCTX_INPROC_SERVER, IID_PPV_ARGS(&pInValues));
    if (FAILED(hr))
    {
        ::MessageBox(NULL, _T("Unable to CoCreateInstance() a PortableDeviceValues collection."), _T("Sensor C++ Sample"), MB_OK | MB_ICONERROR);
        return -1;
    }
    // fill it in
    pInValues->SetIPortableDeviceValuesValue(SENSOR_PROPERTY_CHANGE_SENSITIVITY, pInSensitivityValues);
    // now actually set the sensitivity
    IPortableDeviceValues* pOutValues;
    hr = pAls->SetProperties(pInValues, &pOutValues);
    if (FAILED(hr))
    {
        ::MessageBox(NULL, _T("Unable to SetProperties() for Sensitivity."), _T("Sensor C++ Sample"), MB_OK | MB_ICONERROR);
        return -1;
    }
    // check to see if any of the setting requests failed
    DWORD dwCount = 0;
    hr = pOutValues->GetCount(&dwCount);
    if (FAILED(hr) || (dwCount > 0))
    {
        ::MessageBox(NULL, _T("Failed to set one-or-more Sensitivity values."), _T("Sensor C++ Sample"), MB_OK | MB_ICONERROR);
        return -1;
    }
    PropVariantClear(&pv);
    [/xhtml]

    Requesting Sensor Permissions

    End users may consider the information provided by sensors to be sensitive, i.e., personally identifiable information (PII). Data Fields such as the computer’s location (latitude and longitude) could be used to track the user. So, before use, Windows forces apps to obtain end-user permission to access the sensor. Use the State property of the sensor and the RequestPermissions() method of the SensorManager where needed.

    The RequestPermissions() method takes an array of sensors as an argument, so you can request permission for more than one sensor at a time if you want. The C++/COM code is shown below. Note that you must provide an (ISensorCollection *) argument to RequestPermissions().

    [xhtml]// Get the sensor's state
    SensorState state = SENSOR_STATE_ERROR;
    HRESULT hr = pSensor->GetState(&state);
    if (FAILED(hr))
    {
        ::MessageBox(NULL, _T("Unable to get sensor state."), _T("Sensor C++ Sample"), MB_OK | MB_ICONERROR);
        return -1;
    }
    // Check for access permissions, request permission if necessary.
    if (state == SENSOR_STATE_ACCESS_DENIED)
    {
        // Make a SensorCollection with only the sensors we want to get permission to access.
        ISensorCollection *pSensorCollection = NULL;
        hr = ::CoCreateInstance(CLSID_SensorCollection, NULL, CLSCTX_INPROC_SERVER, IID_PPV_ARGS(&pSensorCollection));
        if (FAILED(hr))
        {
            ::MessageBox(NULL, _T("Unable to CoCreateInstance() a SensorCollection."), 
                _T("Sensor C++ Sample"), MB_OK | MB_ICONERROR);
            return -1;
        }
        pSensorCollection->Clear();
        pSensorCollection->Add(pAls); // add 1 or more sensors to request permission for...
        // Have the SensorManager prompt the end-user for permission.
        hr = m_pSensorManager->RequestPermissions(NULL, pSensorCollection, TRUE);
        if (FAILED(hr))
        {
            ::MessageBox(NULL, _T("No permission to access sensors that we care about."), 
                _T("Sensor C++ Sample"), MB_OK | MB_ICONERROR);
            return -1;
        }
    }
    [/xhtml]

    Sensor Data Updates

    Sensors report data by raising an event called DataUpdated. The actual Data Fields are packaged inside a SensorDataReport, which is passed to all attached DataUpdated event handlers. Your app obtains the SensorDataReport by hooking a callback handler to the sensor’s DataUpdated event. The event occurs on a Windows Sensor Framework thread, which is a different thread from the message-pump thread used to update your app’s GUI. You therefore need to hand the SensorDataReport off from the event handler (Als_DataUpdate) to a separate handler (Als_UpdateGUI) that can execute in the GUI thread’s context. In .NET, such a handler is called a delegate function.

    The example below shows the implementation of the delegate function. In C++/COM, you must use the SetEventSink method to hook the callback. The callback cannot just be a function; it must be an entire class that inherits from ISensorEvents and also implements IUnknown. The ISensorEvents interface must implement the callback functions:

    [xhtml]STDMETHODIMP OnEvent(ISensor *pSensor, REFGUID eventID, IPortableDeviceValues *pEventData);
    	STDMETHODIMP OnDataUpdated(ISensor *pSensor, ISensorDataReport *pNewData);
    	STDMETHODIMP OnLeave(REFSENSOR_ID sensorID);
    	STDMETHODIMP OnStateChanged(ISensor* pSensor, SensorState state);
    // Hook the Sensor for any DataUpdated, Leave, or StateChanged events.
    SensorEventSink* pSensorEventClass = new SensorEventSink();  // create C++ class instance
    ISensorEvents* pSensorEvents = NULL;
    // get the ISensorEvents COM interface pointer
    HRESULT hr = pSensorEventClass->QueryInterface(IID_PPV_ARGS(&pSensorEvents)); 
    if (FAILED(hr))
    {
        ::MessageBox(NULL, _T("Cannot query ISensorEvents interface for our callback class."), 
            _T("Sensor C++ Sample"), MB_OK | MB_ICONERROR);
        return -1;
    }
    hr = pSensor->SetEventSink(pSensorEvents); // hook COM interface of our class to Sensor eventer
    if (FAILED(hr))
    {
        ::MessageBox(NULL, _T("Cannot SetEventSink on the Sensor to our callback class."), 
            _T("Sensor C++ Sample"), MB_OK | MB_ICONERROR);
        return -1;
    }
    [/xhtml]

    Code: Setting a COM event sink for the sensor

    The DataUpdated event handler receives the SensorDataReport (and the sensor that initiated the event) as arguments. It calls the form’s Invoke() method to post those items to the delegate function. The GUI thread runs the delegate function posted to its Invoke queue and passes the arguments to it. The delegate function casts the data type of the SensorDataReport to the expected subclass, gaining access to its Data Fields. The Data Fields are extracted using the GetDataField() method of the SensorDataReport object. Each of the Data Fields has to be typecast to its “expected”/“documented” data type (from the generic/polymorphic data type returned by the GetDataField() method). The app can then arrange and display the data in the GUI.

    The OnDataUpdated event handler receives the SensorDataReport (and the sensor that initiated the event) as arguments. The Data Fields are extracted using the GetSensorValue() method of the SensorDataReport object. Each Data Field needs its PROPVARIANT checked against its “expected”/“documented” data type. The app can then arrange and display the data in the GUI. No equivalent of a C# delegate is required, because all C++ GUI functions (such as the SetWindowText() shown here) use Windows message-passing to post the GUI update over to the GUI thread / message loop (the WndProc of your main window or dialog).

    [xhtml]STDMETHODIMP SensorEventSink::OnDataUpdated(ISensor *pSensor, ISensorDataReport *pNewData)
    {
        HRESULT hr = S_OK;
        if ((NULL == pNewData) || (NULL == pSensor)) return E_INVALIDARG;
        float fLux = 0.0f;
        PROPVARIANT pv = {};
        hr = pNewData->GetSensorValue(SENSOR_DATA_TYPE_LIGHT_LEVEL_LUX, &pv);
        if (SUCCEEDED(hr))
        {
            if (pv.vt == VT_R4) // make sure the PROPVARIANT holds a float as we expect
            {
                // Get the lux value.
                fLux = pv.fltVal;
                // Update the GUI
                wchar_t *pwszLabelText = (wchar_t *)malloc(64 * sizeof(wchar_t));
                swprintf_s(pwszLabelText, 64, L"Illuminance Lux: %.1f", fLux);
                BOOL bSuccess = ::SetWindowText(m_hwndLabel, (LPCWSTR)pwszLabelText);
                if (bSuccess == FALSE)
                {
                    ::MessageBox(NULL, _T("Cannot SetWindowText on label control."), 
                        _T("Sensor C++ Sample"), MB_OK | MB_ICONERROR);
                }
                free(pwszLabelText);
            }
        }
        PropVariantClear(&pv);
        return hr;
    }
    [/xhtml]

    You can also extract Data Fields from a SensorDataReport by simply referencing properties of the SensorDataReport object. That only works for the .NET API (in the Win32/COM API, you must use the GetDataField method), and only for the “well known” or “expected” Data Fields of a particular SensorDataReport subclass. It is possible (using “Dynamic Data Fields”) for the underlying driver/firmware to “piggyback” arbitrary “extended/unexpected” Data Fields inside SensorDataReports. To extract those, you must use the GetDataField method.
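
    A sketch of the Win32/COM equivalent, where an arbitrary field is addressed by its PROPERTYKEY (the key below is hypothetical; a real one would come from the vendor's documentation):

    [xhtml]// Sketch: extracting a vendor "extended" data field inside OnDataUpdated().
    PROPERTYKEY MY_VENDOR_DATA_FIELD =
        { { 0x12345678, 0x1234, 0x1234,
            { 0x12, 0x34, 0x12, 0x34, 0x56, 0x78, 0x9a, 0xbc } }, 2 };
    PROPVARIANT pv;
    PropVariantInit(&pv);
    HRESULT hr = pNewData->GetSensorValue(MY_VENDOR_DATA_FIELD, &pv);
    if (SUCCEEDED(hr))
    {
        // Check pv.vt against the vendor's documented type before using pv.
    }
    PropVariantClear(&pv);
    [/xhtml]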

    Using Sensors in Metro Style Apps

    Unlike Desktop mode, the Metro/WinRT sensor API follows a common template for each of the sensors:

    • There is usually a single event, called ReadingChanged, that invokes the callback with an xxxReadingChangedEventArgs containing a Reading object that holds the actual data. (The accelerometer is an exception: it also has a Shaken event.)
    • The hardware-bound instance of the sensor class is retrieved using the GetDefault() method.
    • Polling can be done with the GetCurrentReading() method, as in the sketch below.
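
    A minimal C++/CX sketch of this template for the accelerometer (error handling omitted; the lambda body is illustrative):

    [xhtml]using namespace Windows::Devices::Sensors;
    using namespace Windows::Foundation;

    Accelerometer^ accel = Accelerometer::GetDefault();  // hardware-bound instance
    if (accel != nullptr)
    {
        accel->ReportInterval = accel->MinimumReportInterval;
        accel->ReadingChanged +=
            ref new TypedEventHandler<Accelerometer^, AccelerometerReadingChangedEventArgs^>(
                [](Accelerometer^ sender, AccelerometerReadingChangedEventArgs^ e)
                {
                    AccelerometerReading^ r = e->Reading;
                    double x = r->AccelerationX;  // also AccelerationY, AccelerationZ
                });
        AccelerometerReading^ now = accel->GetCurrentReading();  // polling alternative
    }
    [/xhtml]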

    Metro style apps are generally written in JavaScript* or C#. There are different language bindings to the API, which result in slightly different capitalization in the API names and slightly different ways that events are handled. The simplified API is easier to use, with the pros and cons listed in Table 5.

    Table 5: Sensor APIs for Metro style apps, with pros and cons

    Feature: SensorManager
    Pro: There is no SensorManager to deal with. Apps use the GetDefault() method to get an instance of the sensor class.
    Con: It may not be possible to search for arbitrary sensor instances. If more than one sensor of a particular type exists on the computer, you will only see the “first” one.
    Con: It may not be possible to search for arbitrary sensor types or categories by GUID. Vendor value-add extensions are unavailable.

    Feature: Events
    Pro: Apps only worry about the DataUpdated event.
    Con: Apps have no access to Enter, Leave, StatusChanged, or any other event types. Vendor value-add extensions are unavailable.

    Feature: Sensor properties
    Pro: Apps only worry about the ReportInterval property.
    Con: Apps have no access to other properties, including the most useful one: sensitivity.
    Con: Other than manipulating the ReportInterval property, there is no way for Metro style apps to tune or control the flow of data reports.
    Con: Apps cannot access arbitrary properties by PROPERTYKEY. Vendor value-add extensions are unavailable.

    Feature: Data report properties
    Pro: Apps only worry about a few predefined Data Fields present in each sensor.
    Con: Apps have no access to other Data Fields. If a sensor “piggybacks” additional well-known Data Fields in a data report beyond what the Metro style app expects, those Data Fields are unavailable.
    Con: Apps cannot access arbitrary Data Fields by PROPERTYKEY. Vendor value-add extensions are unavailable.
    Con: Apps cannot query at run time which Data Fields a sensor supports. They can only assume the Data Fields the API predefines.

    Summary

    The Windows 8 API lets developers use sensors across different platforms, both in traditional Desktop mode and in the new Windows* 8 Store app interface. In this article, we gave an overview of the sensor APIs available to developers creating apps for Windows 8, focusing on the APIs and code samples for Desktop mode apps.

    Appendix

    Coordinate Systems for Different Form Factors

    The Windows API reports the X, Y, and Z axes in a way that is compatible with the HTML5 standard (and Android*). It is also called the "ENU" system because X points toward virtual "East" (E), Y toward virtual "North" (N), and Z toward "Up" (U).

    To figure out the direction of a rotation, use the "right-hand rule":

    * Point the thumb of your right hand in the direction of one of the axes.

    * Positive angular rotation around that axis follows the curve of your fingers.

    These are the X, Y, and Z axes for a tablet or phone (left) and for a clamshell. For more complex form factors (such as a clamshell convertible to a tablet), the "standard" orientation is the one that applies when the device is in its TABLET state.

    If you want to develop a navigation app (such as a 3D space game), you need to convert from "ENU" in your program. This is easily done via matrix multiplication. Graphics libraries such as Direct3D* and OpenGL* have APIs for handling this.
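
    For example, mapping an ENU vector into a left-handed, Y-up frame (a common Direct3D-style convention; the target frame here is an assumption, so adjust the mapping for your engine) is just a fixed axis permutation, i.e., multiplication by a constant 3×3 matrix. A minimal C# sketch:

    [xhtml]
    public static class EnuConversion
    {
        // ENU input: e = East (X), n = North (Y), u = Up (Z), right-handed.
        // Output: a left-handed, Y-up frame (assumed target convention).
        public static void EnuToLeftHandedYUp(
            double e, double n, double u,
            out double x, out double y, out double z)
        {
            x = e; // East stays on the horizontal X axis
            y = u; // Up becomes the vertical Y axis
            z = n; // North becomes the depth Z axis (handedness flips)
        }
    }
    [/xhtml]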

    Resources

    Win 7 Sensor API: http://msdn.microsoft.com/library/windows/desktop/dd318953(VS.85).aspx

    Sensor API Programming Guide: http://msdn.microsoft.com/en-us/library/dd318964(v=vs.85).aspx

    Integrating Motion and Orientation Sensors: http://msdn.microsoft.com/en-us/library/windows/hardware/br259127.aspx

    About the Author

    Deepak Vembar

    Deepak Vembar is a research scientist in the Interaction and Experience Research (IXR) division of Intel Labs. His research focuses on computer graphics and human-computer interaction, including real-time graphics, virtual reality, haptics, eye tracking, and user interaction. Before joining Intel Labs, Deepak was a software engineer in Intel's Software and Services Group (SSG), working with PC game developers to optimize games for Intel platforms, teaching courses and guidelines on heterogeneous platform optimization, and building game demos for university curricula used as instructional media in school courses.

    Deepak holds a doctorate from the School of Computing at Clemson University, where he studied the use of computer-based training simulators to improve aircraft inspection. His dissertation, "Visuohaptic simulation of a borescope for aircraft engine inspection," combined an off-the-shelf haptic device with a computer simulator to train novice inspectors to properly examine aircraft engines. He also holds a bachelor's degree from Clemson University; his thesis, "Towards Improved Behavioral Realism in Avatars," studied the recognition and classification of hand gestures using two electromagnetic trackers. He has co-authored 13 papers presented and published at academic conferences including IEEE 3DUI, Graphics Interface, and the Game Developers Conference (GDC).

    Notices

    This document contains information about Intel products. No license, express or implied, to any intellectual property rights is granted to any person by this document, whether on behalf of Intel Corporation or any other institution. Except as provided in Intel's terms and conditions of sale for such products, Intel assumes no liability whatsoever, and Intel disclaims any express or implied warranty relating to the sale and/or use of Intel products, including warranties of fitness for a particular purpose, merchantability, or non-infringement of any patent, copyright, or other intellectual property right.

    Unless expressly agreed to in writing by Intel, Intel products are not designed or intended for any application in which the failure of the Intel product could create a situation where personal injury or death may occur.

    Intel may change product specifications and descriptions at any time, without notice. Designers must not rely on the absence or characteristics of any features of an Intel product, nor on any descriptions marked "reserved" or "undefined". Intel reserves these for future definition and assumes no responsibility for conflicts or incompatibilities arising from future changes to them. The information here is subject to change without notice. Do not finalize a product design based on the information in this document.

    The products described in this document may contain design defects or errors that may cause the product to deviate from published specifications. These defects or errors are collected in an errata list, available on request.

    Contact your local Intel sales office or distributor to obtain the latest product specifications before placing an order.

    Copies of documents that have an order number and are referenced in this document, or other Intel literature, may be obtained by calling 1-800-548-4725, or by visiting: http://www.intel.com/design/literature.htm

    Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. Performance tests, such as SYSmark and MobileMark, are measured using specific computer systems, components, software, operations, and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products.

    The software source code reprinted in this document is furnished under a software license and may only be used or copied in accordance with the terms of that license.

    Intel and the Intel logo are trademarks of Intel Corporation in the U.S. and/or other countries.

    Copyright © 2012 Intel Corporation. All rights reserved.

    *Other names and brands may be claimed as the property of others.

    Performance Notice

    For complete information about performance and benchmark results, visit: www.intel.com/benchmarks

    Optimization Notice

    Intel's compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors. These optimizations include the SSE2, SSE3, and SSSE3 instruction sets and other optimizations. Intel does not guarantee the availability, functionality, or effectiveness of any optimization on microprocessors not manufactured by Intel.

    Microprocessor-dependent optimizations in this product are intended for use with Intel microprocessors. Certain optimizations not specific to Intel microarchitecture are reserved for Intel microprocessors. Please refer to the applicable product User and Reference Guides for more information regarding the specific instruction sets covered by this notice.

    Notice revision #20110804

    For more information about compiler optimizations, see the Optimization Notice.



  • Curated Home
  • Microsoft Windows* 8
  • Windows*
  • Graphics
  • Microsoft Windows* 8 Desktop
  • Sensors
  • URL

  • Introduction to Intel Perceptual Computing SDK: HelloCam!


    Download


    Introduction to Intel® Perceptual Computing SDK: HelloCam [PDF 305 KB]

    When you want to learn a new programming language or SDK, the first step is usually to write a program called "Hello World!" Here, we are talking about how to use the web cam features of the Intel® Perceptual Computing SDK, so we will call our first program "Hello Cam!"

    But first, a bit of architecture. The following figure shows the architectural model of the Intel Perceptual Computing SDK:


    Figure 1. Intel Perceptual Computing SDK Architecture

    In this article, we will build an example using the functionality provided by the .NET port of the Perceptual SDK (shown in red in Figure 1).
    In particular, we will use the classes inside libpxcclr.dll, which is located in the folders:

    $(PCSDK_DIR)\bin\win32
    $(PCSDK_DIR)\bin\x64
    

    This library wraps the C++ classes, exposing their functionality to the .NET world (both C# and VB.NET).

    The Windows Form project


    To begin, we open Visual Studio* 2012 and create a new Windows Forms project as shown in the following figure.

    Once the project is created, we need to add a reference to the dll using the command "Add Reference ..."

    In this project, we will use the features made available by the UtilMPipeline class. This class works as a bridge between the .NET world and the UtilPipeline class (written in C++ and documented at http://software.intel.com/sites/landingpage/perceptual_computing/documentation/html/utilpipeline.html). It is the easiest way to access the functionality offered by the SDK.

    In this sample, we will just acquire images from the web cam and show them in our GUI. In particular, we implement our own class that extends UtilMPipeline.

    Public Class ImageCamPipeline
        Inherits UtilMPipeline
        Public Sub New()
            MyBase.New()
            EnableImage(PXCMImage.ColorFormat.COLOR_FORMAT_RGB24, 640, 480)
        End Sub
    
    End Class
    
    

    The class constructor enables the web cam module of the base class by calling the EnableImage method (in our example, the image format is 24-bit RGB and the size is 640 by 480 pixels). We will see later the other possibilities offered by the EnableImage method and by the PXCMImage.ColorFormat enumeration.

    First of all, note that all "managed" classes (.NET world) have the letter M between the prefix that indicates the area of use and the suffix that indicates the functionality. For example, PXCMImage is the .NET class that implements the PXCImage interface of the Perceptual Computing framework.

    To manage the flow of images from the web cam and show them in our GUI, we need to override the OnImage method of UtilMPipeline.

        Public Overrides Sub OnImage(image As PXCMImage)
            ' retrieve the active session
            Dim session = Me.QuerySession()
            If session IsNot Nothing Then
                Dim bitmap As Bitmap
            ' retrieve the bitmap from the webcam (lastProcessedBitmap is a Bitmap field of this class)
                Dim pcmStatus = image.QueryBitmap(session, lastProcessedBitmap)
                If pcmStatus = pxcmStatus.PXCM_STATUS_NO_ERROR Then
                    ' Create a new bitmap and fire event to GUI
                    bitmap = New Bitmap(lastProcessedBitmap)
                    RaiseEvent ImageCaptured(Me, New ImageCapturedEventArgs() With {.Image = bitmap,
                                                                                    .TimeStamp = DateTime.Now,
                                                                                    .FPS = CalculateFPS(.TimeStamp)})
                End If
            End If
        End Sub
    
    

    ImageCamPipeline fires the ImageCaptured event when a new image is ready and the GUI can manage the event to show the image.

    Public Event ImageCaptured(sender As Object, e As ImageCapturedEventArgs)
    

    In our sample, the event argument contains the image, the capture timestamp (date and time) and the instantaneous frame rate (calculated as the difference between the timestamps of two consecutive images).

    Public Class ImageCapturedEventArgs
        Inherits EventArgs
        Public Property Image As Bitmap
        Public Property TimeStamp As DateTime
        Public Property FPS As Integer?
    End Class
    
    
    


    The instantaneous frame rate is calculated from the difference of timestamps of two consecutive images using the following function:
        Private Function CalculateFPS(currentTimeStamp As Date) As Integer?
            Dim fps As Integer? = Nothing
            If LastImageTimestamp.HasValue Then
                fps = CInt(Math.Floor(1000D / (currentTimeStamp - LastImageTimestamp.Value).TotalMilliseconds))
            End If
            Me.LastImageTimestamp = currentTimeStamp
            Return fps
        End Function
    
    


    Once we have created the class that interacts with the SDK functionality, we can begin to create the interface that will display the images and the frame rate.
    The following figure shows the Visual Studio designer for our form:


    The code behind of the form is quite simple:

    Public Class Form1
        Private pipeline As ImageCamPipeline
       Private Sub Form1_FormClosing(sender As Object, e As FormClosingEventArgs) Handles Me.FormClosing
            pipeline.Dispose()
        End Sub
        Private Sub Form1_Load(sender As Object, e As EventArgs) Handles Me.Load
            pipeline = New ImageCamPipeline
            AddHandler pipeline.ImageCaptured, AddressOf ImageCapturedHandler
            pipeline.LoopFrames()
        End Sub
        Private Sub ImageCapturedHandler(sender As Object, e As ImageCapturedEventArgs)
            UpdateInterface(e)
        End Sub
        Private Delegate Sub UpdateInterfaceDelegate(e As ImageCapturedEventArgs)
        Public Sub UpdateInterface(e As ImageCapturedEventArgs)
            If Me.InvokeRequired Then
                Me.Invoke(New UpdateInterfaceDelegate(AddressOf UpdateInterface), e)
            Else
                pctImage.Image = e.Image
                If e.FPS.HasValue Then
                    lblFPS.Text = String.Format("{0} fps", e.FPS.Value)
                Else
                    lblFPS.Text = String.Empty
                End If
            End If
        End Sub
    End Class
    
    

    The form has an instance of ImageCamPipeline that is created in the load form event. Once we have created the ImageCamPipeline's instance, we add the event handler for ImageCaptured and finally we call the LoopFrames method to start the image capture.

    The LoopFrames method, as we can read in the official documentation (http://software.intel.com/sites/landingpage/perceptual_computing/documentation/html/loopframes_utilpipeline.html), starts a loop that acquires and releases frames. From that documentation, the pseudo-code executed by the method is as follows:

    if (!Init()) return false;
     for (;;) {
             // some device hot-plug code omitted.
             if (!AcquireFrame(true)) break;
             if (!ReleaseFrame()) break;
     }
     Close();
     return true
    
    

    This class performs three main steps. It:

    1. is initialized (Init method)
    2. runs an endless loop in which the frame is retrieved from the cam and, subsequently, released (AcquireFrame and ReleaseFrame methods)
    3. closes the connection with the device (Close method)

    LoopFrames blocks the thread that calls it; in our case this means the GUI cannot start, because the LoopFrames method is called in the Load event handler, which runs on the UI main thread.
    To solve the problem, we can override the LoopFrames method so that the base class's LoopFrames runs in a new thread. The new thread won't block the main thread, and our UI can start.

        Public Overrides Function LoopFrames() As Boolean
            If LoopFramesThread Is Nothing Then
                LoopFramesThread = New Thread(New ThreadStart(AddressOf LoopFramesCode))
                LoopFramesThread.Start()
                Return True
            Else
                Return False
            End If
        End Function
        Private LoopFramesThread As Thread
         Private Sub LoopFramesCode()
            Try
                RaiseEvent PipelineStarted(Me, EventArgs.Empty)
                Dim result = MyBase.LoopFrames()
                If Not result Then RaiseEvent PipelineError(Me, EventArgs.Empty)
            Catch ex As Exception
                RaiseEvent PipelineStopped(Me, EventArgs.Empty)
            End Try
        End Sub
    
    

    We use Try...Catch in the method to manage the thread abort that may occur when we close the application.
    The PipelineStarted, PipelineError, and PipelineStopped events allow us to notify the UI when the pipeline starts, when an error occurs, and when the pipeline stops.
    We might get an error if we try to use a feature not available on the web cam we are using; for example, if we try to take a depth image without a Creative* Interactive Gesture Camera.

    By redefining the LoopFrames method, our pipeline becomes multi-threaded and the GUI stays fluid and responsive. Because the LoopFrames method is performed on a secondary thread, the ImageCaptured event raised from the pipeline occurs on a thread that is not the UI main thread.

    Since Windows Forms uses single-threaded apartments (STA), we are forced to update our interface using the InvokeRequired property plus the Invoke method (see the UpdateInterface method). The sample is quite simple, but it lets you see how the Perceptual Computing SDK works.
    One of the interesting features of the Perceptual Computing SDK is that certain features work even if you are not using a Creative Cam. For example, if you retrieve RGB images from the web cam, you can also run the code using your PC's built-in web cam.

    With a Creative Cam, you can retrieve other types of images (such as the depth image) and, in general, get better performance. For example, the following figure shows a comparison between the integrated web cam on my Ultrabook™ system and the Creative Cam in terms of definition and, especially, instantaneous frame rate.


    On a Creative Cam, we can retrieve the Depth image simply by changing the pipeline constructor:

        Public Sub New()
            MyBase.New()
            EnableImage(PXCMImage.ColorFormat.COLOR_FORMAT_DEPTH, 320, 240)
        End Sub
    
    

    The following figure shows the depth image:

    The set of values for the PXCMImage.ColorFormat enumeration is available at http://software.intel.com/sites/landingpage/perceptual_computing/documentation/html/index.html?pxcimagecolorformat.html.

     

    Intel, the Intel logo, and Ultrabook are trademarks of Intel Corporation in the U.S. and/or other countries.
    Copyright © 2013 Intel Corporation. All rights reserved.
    *Other names and brands may be claimed as the property of others.

  • Intel Perceptual Computing SDK
  • VB.NET
  • Microsoft Visual Studio* 2012
  • .NET Framework 4
  • Developers
  • Microsoft Windows* (XP, Vista, 7)
  • Microsoft Windows* 8
  • Windows*
  • .NET*
  • Beginner
  • Intel® Perceptual Computing SDK
  • Perceptual Computing
  • Development Tools
  • Sensors
  • Laptop
  • Server
  • Desktop
  • URL
  • Code Sample
  • Getting started
  • Stunned by Sensors – Starting my Ultrabook Training


    As an avid fan of all things Star Trek*, I can’t help but hear Mr. Spock’s voice when I read about all these sensors watching and helping us in our lives today. Unfortunately, as a developer, I usually feel like the guy in the red shirt on the away party when Spock says, “Sir, sensors indicate we are clear to send Mr. Duncan into the cave.” The cave in which I am then killed by the monster that extracts all the salt from my body. The real nightmare is that I’m the guy responsible for the monster-detection code. Suffice it to say I have a lot of work to do if I’m going to save myself from this nightmare. Codeproject.com to the rescue. This site has held developer contests for Intel® Ultrabook™-specific development. The Windows 8* & Ultrabook App Innovation Contest (link) challenged developers to create a touch-aware app and included “just go crazy with the sensors” in the list of acceptable entry criteria. I need to know more about these sensors, and I’ve been told I’m crazy, so I’m checking these articles out. I’ve started with Adam David Hill’s submission, “Celerity: Sensory Overload” (link). Mr. Hill not only created a rather compelling and functional game (YouTube* demo), but he also does a good job describing some of the challenges he faced getting things like head tracking working with just a webcam.

    For me, I’m going to keep reading and start implementing my own code as soon as my Ultrabook arrives.  I do hope I can get my sensor education complete before the Captain sends me to the planet’s surface.

    Code Project Ultrabook page -
    http://www.codeproject.com/KB/ultrabooks/
    Windows 8* & Ultrabook App Innovation Contest -
    http://www.codeproject.com/Competitions/598/Windows-8-Ultrabook-Application-and-Article-Contes.aspx
    Celerity: Sensory Overload Submission –
    http://www.codeproject.com/Articles/471705/Celerity-Sensory-Overload
    Celerity Demo on YouTube –
    http://youtu.be/HhSCLk8jyd0

  • Tim Duncan
  • Ultrabooks
  • Touch and Sensors
  • app innovation contest

    Blast from My Past: How Meteors Helped Me Understand Touch Sensors


    After getting my Star Trek on while reading about writing code for the sensors on the Intel® Ultrabook at codeproject.com, I was pleased to see that Adrian Atkison had submitted an article titled “Meteor Madness.” The game is reminiscent of way too many hours I spent on an Atari* avoiding and destroying the astronomical objects found just outside Mars in our solar system. One of Mr. Atkison’s innovations is allowing both touch and classic keyboard control simultaneously during game play. Since I have a historical feel for the game, I found the explanations in this article particularly helpful in getting my new-to-sensors head around what one can do when going from traditional input devices to touch input. Well done, Adrian; I’m going to go blow up some space rocks.


    Meteor Madness Article –
    http://www.codeproject.com/Articles/480771/Meteor-Madness

    Gameplay Demo on Youtube –
    http://youtu.be/7PMhxBsmMLM 

    ‘Roid Rage Game on Windows 8 Store –
    http://apps.microsoft.com/windows/en-us/app/roid-rage/2d6287ff-e08f-4499-8245-d00899fd5824

     

  • Tim Duncan touch sensor codeproject contest

    Running Out of Gas? Intel® processor-based Ultrabook™ Sensors to the Rescue!


    As I begin my 3rd article on what I’ve learned about development for an Ultrabook™ system on codeproject.com, I feel I need to bring the subject closer to earth. I am so glad Dr. A. Bell submitted his article, “Road and Driving Pattern Analyzer using Ultrabook™,” to the site. I drive about 15,000 miles per year for commuting and vacations, some years more. One such vacation was a 6-week journey through the southwestern United States with my wife and dog. Now if you’ve ever watched Roadrunner* cartoons, you have a good idea where we were. We saw so many beautiful vistas and learned a little as well. There are phenomenal landscapes and fascinating pre-history, but there's also a lot of time on the road looking at different shades of dirt. For me, I spend a fair amount of that time playing with various calculations in my head, like trip miles per hour including gas/potty/food stops or, more importantly, whether there is enough gas in the tank to get to the next fueling station (a real concern when crossing Death Valley, for instance). Too bad I didn't have a device and application that approaches what Dr. Bell's analyzer does; I'd never have gotten bored. Check out the references at the bottom of the article - they really help explain the possibilities of combining generally available information systems like web maps with Intel processor-based Ultrabook sensor-specific data to create compelling and often even useful applications. Oh well, now that I’ve read about connecting the app to Microsoft* Bing* and my own Ultrabook system's GPS, we just may have to bring the kids along on the next trek.

    Google* Maps Image of Our Route

    Road and Driving Pattern Analyzer using Ultrabook™ -
    http://www.codeproject.com/Articles/481804/Road-and-Driving-Pattern-Analyzer-using-Ultrabook

  • intel ultrabook tim duncan GPS maps

    Digital Storytelling with Augmented Reality - The Book is Just the Beginning


    Download


    Digital Storytelling with Augmented Reality - The Book is Just the Beginning [PDF KB]

    Abstract


    Digital storytelling provides an immersive experience, combining the power of your Intel® Core™ processor-based platform and the printed page. The augmented farm application demonstrates augmented reality through digital storytelling using the Intel® Perceptual Computing SDK.

    Introduction


    Imagine sitting down with your child and their favorite storybook. Together, you not only see the characters, but choose where they appear on the page. You interact with a virtual world overlaid upon the physical book. Augmented reality combines a real-world environment with computer-generated graphics, sound and other elements. Augmented reality and the Intel® Perceptual Computing SDK are complementary technologies that, when combined, have the potential to create immersive, compelling experiences.

    Digital Storytelling – Augmented Farm


    With perceptual computing, users interact with their devices and the virtual world using voice, 3D depth tracking, and face/head movements. Digital storytelling emerged through the blending of these technologies and allows readers to interact naturally with the augmented world. This model drives next-generation media creation and consumption. By breaking the barriers between traditional media types (e-books, video clips, games, and greeting cards), this form of entertainment has astounding possibilities. This demo will inspire content providers by showing them new ways for their media to be consumed.

    Virtual Exploration - When you show a page from the book to an application enabled with augmented reality, the page first needs to be recognized via a user-facing RGB and 3D depth camera.

    The image at the top shows the reader holding the storybook page in front of the computer’s camera, starting the recognition and augmentation processes. The series below starts with the flat page, then shows the augmented image after the page is recognized and portrayed on-screen. The third image in the series shows SDK-enabled object tracking (book rotation). The final image in the series shows the reader manipulating the animated characters with his hand.

    Flat page

    Augmented image of page
    Augmentation tracking page
    Interacting w/augmented page

    Character Interaction – Another feature in our demo allows the reader to interact with an on-page character. The series of images below shows the page captured through the 3D camera and recognized by the object recognition capabilities included in the SDK. The page is then tracked via additional SDK elements, allowing the superimposed butterfly to be added within the book’s screen space along with the reader. We then see the SDK’s short-range gesture recognition functions combining with multiple image recognition and tracking elements when the butterfly flies from the page to the outstretched hand. In the final image, the reader continues to explore the book’s world even though the book is out of view of the computer’s camera.

    Storybook page recognized
    Butterfly added to screen
    Butterfly flies to hand
    Book no longer required

    Conclusion


    Digital storytelling only scratches the surface of what can be accomplished using the Intel® Perceptual Computing SDK for fully immersive media interaction. Combining real-world recognition (like the pages of the book) with other perceptual computing modes, such as close-range 3D hand tracking (used in the butterfly portion of this demo), opens up endless possibilities for media consumption. The book is just the beginning; get started with the Intel® Perceptual Computing SDK to implement your own reality soon.

    Information on this SDK

    This demonstration was implemented using the Intel® Perceptual Computing SDK 2013. (http://intel.com/software/perceptual)

     

    Intel, the Intel logo, Ultrabook, and Core are trademarks of Intel Corporation in the US and/or other countries.
    Copyright © 2013 Intel Corporation. All rights reserved.
    *Other names and brands may be claimed as the property of others.

     

  • ultrabook
  • applications
  • Gesture Recognition
  • Developers
  • Microsoft Windows* 8
  • Windows*
  • Beginner
  • Perceptual Computing
  • Sensors
  • Laptop
  • Tablet
  • URL

