
Intel Software will be at TDC São Paulo!


Next week, Intel Software will take part in TDC São Paulo, which will be held from July 10 to 14 at Universidade Anhembi Morumbi.

At our booth you can learn more about our technologies, the initiatives of the Intel Software group, and our communities (Android, Servers/HPC, HTML5, and Ultrabooks/Windows).

Come meet our Community Managers and bring your ideas and suggestions so that we can keep delivering the best to you!

During the event we will also present the following talks:

July 10:

Developing Cross-Platform Apps for Mobile Devices with HTML5

Speaker: Jomar Silva

Track: HTML5 and JavaScript

Time: 13:10 to 14:00

Building cross-platform apps for mobile devices is today's big challenge for developers who want to maximize the audience for their apps while minimizing the effort needed to develop and maintain them. An additional challenge involves the build environments for these applications: each platform (iOS, Android, etc.) currently requires its own specific environment, so cloud services that compile and package apps are essential. In this talk we will cover the APIs and cloud services currently available to help developers, and present code examples of HTML5 apps that access native features (sensors) of mobile devices.

July 11:

So, What Exactly Is a Community Manager?

Speakers: Luciano Palma and George Silva

Track: Digital Marketing

Time: 11:10 to 12:00

The Community Manager role is quite new, so its boundaries are still being defined. Come to this talk to understand what Intel's Community Managers do in their day-to-day work, discuss this new professional role, and find out whether it could be your next job. Main topics: the importance of communities to companies; the Community Manager profile; the techniques, tools, and activities of a Community Manager; and the skills and knowledge applied day to day by someone who holds this role at a market-leading company.

Manycore Computing: An Architecture Far Beyond Multicore!

Speaker: Luciano Palma

Track: HPC

Time: 14:10 to 15:00

Using multiple cores and multiple threads in parallel is fundamental to High Performance Computing. In this context, Intel created a chip with an innovative architecture that pushes the limits of computing, housing up to 61 processing cores on a single chip and providing up to 244 simultaneous threads. Come learn about the Manycore architecture, which is used in the most powerful computer in the world, the Tianhe-2. In this session we will present the architecture of a system equipped with the Intel Xeon Phi coprocessor and show how to take advantage of machines with enormous processing power through parallelism.

July 12:

Write Your Android App Without Wasting Energy

Speaker: George Silva

Track: Android

Time: 16:40 to 17:30

Mobile platform constraints, competition, and the ease with which users can recommend or criticize applications have made the app-store challenge much greater for developers. Beyond concerns about architecture and a good test plan, battery constraints and lower performance than traditional platforms demand more creativity to find the best balance between responsiveness and energy consumption. Our focus will be a practical guide to building energy-efficient software, discussing when to trade off user responsiveness against energy use, and touring the main tools that help developers improve both the energy use and the responsiveness of their applications. Come learn about the work Intel Software is doing in the Android ecosystem that benefits development for all devices.

Using Multi-Touch and Sensors with JavaFX and JNI

Speaker: Felipe Pedroso

Track: Java

Time: 10:10 to 11:00

Understand the benefits the JavaFX platform brings to developing applications for Ultrabooks, and the new ways of interacting with these devices. Also learn about the challenges developers face in using the sensors built into these devices.

July 14:

Challenges in Game Production in Brazil

Speakers: Juliano Alves and Mauricio Alegretti

Track: Games

Time: 17:40 to 18:30

Innovation in User Experience: Introducing the Intel Perceptual Computing SDK

Speaker: Felipe Pedroso

Track: Games

Time: To be announced

In recent years, the experience delivered to the user has become a decisive factor in the success of apps. Intel presents the Intel Perceptual Computing SDK with a demo produced by its partner Smyowl. The SDK gives users a more natural, immersive, and intuitive interaction through voice recognition, facial analysis, hand and finger gesture recognition, and object tracking.

We hope to see you there!

  • windows 8
  • html5
  • android
  • HPC
  • ultrabook


    Location Data Logger Design and Implementation: Introduction


    Today I am beginning a multi-part blog series on the design and development of a location-based Windows* Store app. My goal is to provide developers with a complete, real-world example of creating a location-aware application on the Ultrabook and tablet platforms. While an internet search will turn up several examples of how to use the geolocation sensor within a Windows Store app, they tend to be either simple code snippets with little to no discussion of how to integrate them into a larger or more complex app, or narrowly focused examples that provide only rudimentary functionality.

    The application and how to get it

    The application I created, and which I'll be reviewing in this series, is called Location Data Logger. It creates position track logs from your Geolocator sensor and saves them out as CSV, GPX, and/or KML files. It turns your Windows 8 device into a position data logger, and is a very useful utility for recording your position over time. Track logs are used for everything from records of recreational travel to ground-truthing data in geographic databases. With a track log application you can save a log of a run, hike, bicycle ride, car trip, or static position measurements, and review it at a later date. You can plot the track log on a map, import the data into a Geographic Information System for analysis, or just share your data with others.

    I chose this application because it is complex enough not to be trivial, yet simple enough not to be overwhelming. In short, it demonstrates how one can integrate geolocation capability into a fully functioning Windows Store app without being a merely academic exercise, and it is small enough that it can be easily reviewed and discussed.

    Location Data Logger is written in C# and XAML, and the source code can be downloaded here on Developer Zone. You'll need the following in order to build and run the app:

    This app works best on systems that have an integrated GPS or GNSS receiver. If your system does not have an integrated GPS or GNSS, see the blog series “No GPS? No Problem! Using GPSDirect to develop location-aware apps” for information on using an external GPS as a Windows Geolocation sensor.

    In part 1, I'll start by describing the application requirements.

    Part 1: Application Design →
  • geolocation gps gnss location

    Location Data Logger Design and Implementation, Part 1: Application Design


    This is part 1 of a series of blog posts on the design and implementation of the location-aware Windows Store app "Location Data Logger". Download the source code to Location Data Logger here.

    What is a Data Logger?

    Before descending into the details of the development of Location Data Logger, I want to spend some time on what, exactly, a data logger application is and why it's useful.

    The concept originates with GPS receivers, whether they be consumer, commercial, or military grade, and the idea that you can record a log of your position and movement over time. Consumer receivers refer to this as a track log, which consists of a series of track points recording the device's position (including altitude), heading, speed, and the time at some pre-defined interval. Depending on the receiver, other information might be logged as well, such as data from supplemental sensors like air pressure and compass heading. The result is a precise position log that can be reviewed, analyzed, and processed at a later time.
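
    As an illustration only (not code from Location Data Logger), a track point of the kind described above might be modeled roughly like this in C#; the field names are assumptions for the sketch:

    using System;

    // Hypothetical track point: one logged sample of position, motion, and time.
    public class TrackPoint
    {
        public DateTimeOffset Timestamp { get; set; }  // when the fix was recorded
        public double Latitude { get; set; }           // decimal degrees
        public double Longitude { get; set; }          // decimal degrees
        public double? Altitude { get; set; }          // meters above sea level, if available
        public double? Heading { get; set; }           // degrees from north, if available
        public double? Speed { get; set; }             // meters per second, if available
    }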

    A hiker, for example, can save a track of his or her hike and then review it at home to determine the total mileage walked, total elevation gain, as well as plot the hike on a map. That track log can be exported to a data file and shared with others so that future hikers might benefit from the information. Such track logs function as rudimentary trail maps, giving other potential hikers valuable information about the location of trailheads and the route of the trails themselves, which is very useful when local maps are either incomplete or of questionable quality. At the commercial level, track logs are regularly used to ground-truth trails, roads, and other geographic features that are difficult to spot on or trace from aerial imagery and thus may not be accurately mapped.

    Other uses include creating logs of vehicle travel, such as by delivery and transportation companies to monitor the efficiency of routing, and by insurance companies to monitor driver habits (with the driver's permission, of course). Track logs are immensely useful, and dedicated GPS receivers with small form factors and minimal user interfaces, known as data loggers, are readily available in the market.

    Location Data Logger turns your Windows 8 device into a data logger.

    Requirements for the Location Data Logger app

    Given the above, the list of requirements for a data logger app is mercifully short and simple. At minimum, the app must:

    1. Track position
    2. Start and stop logging
    3. Record position logs to a data file

    As I said, that is a very short list. Practically speaking, however, the app needs to offer a bit more than this to provide a decent user experience, so I'll expand it to:

    1. Track position
    2. Start and stop logging
    3. Record position logs to one or more commonly-accepted data file formats
    4. Per-session logging
    5. Stay active while logging
    6. Filter by estimated accuracy

    The first two should be fairly obvious, but the last four probably require some explanation.

    Recording to commonly-accepted file formats

    A key function of a data logger is the ability to export and share the track logs, and that means choosing a file format that is accepted by other applications. In the GPS world, there are several standards for exchanging data points and Location Data Logger supports the three that are arguably the most common and most useful. They are:

    • CSV. The comma-separated value format is versatile because it does not have a fixed schema. Data is simply written out into a flat file that resembles a table, with one row of comma-separated values per data line.
    • GPX. The GPS Exchange format is an XML-based data format designed specifically for interchanging GPS data. It is an open standard first released in 2002 and has undergone some minor revisions since that time. GPX is widely supported by GIS systems and data converters.
    • KML. The Keyhole Markup Language is a newer XML-based format and also an open standard. Unlike GPX, KML files are not limited to just GPS logging, and can be used to describe arbitrary geographic elements such as points of interest/waypoints, lines, polygons, networks, overlays, and more, with styling information for each. Entire maps can be defined using just KML elements.

    These three file formats are almost universally supported by Geographic Information Systems, mapping applications, and GPS data converters. In Location Data Logger, the user can choose to log to one or more of these three formats simultaneously, allowing the greatest flexibility in sharing their track logs.
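
    To make the differences between these formats concrete, here is a rough sketch, not the app's actual export code, of how a single track point might be written as a CSV row and as a GPX trkpt element (the ExportCSV/ExportGPX classes in the real source may differ):

    using System;
    using System.Globalization;

    static class TrackPointFormats
    {
        // CSV: one comma-separated row per point (timestamp, latitude, longitude, altitude).
        public static string ToCsvRow(DateTimeOffset time, double lat, double lon, double alt)
        {
            return string.Format(CultureInfo.InvariantCulture,
                "{0:o},{1},{2},{3}", time, lat, lon, alt);
        }

        // GPX: latitude/longitude as attributes, elevation and time as child elements.
        public static string ToGpxTrackPoint(DateTimeOffset time, double lat, double lon, double alt)
        {
            return string.Format(CultureInfo.InvariantCulture,
                "<trkpt lat=\"{0}\" lon=\"{1}\"><ele>{2}</ele><time>{3:o}</time></trkpt>",
                lat, lon, alt, time);
        }
    }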

    Per-session logging

    This feature is a matter of log file hygiene. When the user starts the data logger, a new log file is created. When they stop the logger, the log file is closed. This allows users to separate their track logs into different files. Location Data Logger goes a step further by automatically naming log files by date and time, so that the user does not have to be prompted, and by generating unique filenames to avoid collisions if the user starts and stops the logger in rapid succession.
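
    A minimal sketch of such a naming scheme, an assumption about the approach rather than the app's exact code, might look like this:

    using System;
    using System.Collections.Generic;

    // Hypothetical per-session log file naming: date/time based, with a numeric
    // suffix appended to avoid collisions when the logger is started and stopped rapidly.
    static string MakeLogFileName(string extension, ISet<string> existingNames)
    {
        string baseName = "log-" + DateTime.Now.ToString("yyyyMMdd-HHmmss");
        string name = baseName + "." + extension;
        int suffix = 1;
        while (existingNames.Contains(name))
        {
            name = baseName + "-" + suffix + "." + extension;
            suffix++;
        }
        return name;
    }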

    Staying active while logging

    It would be undesirable for the logger to be interrupted in the middle of an active session, since the point of running a data logger is to log every data point. This means keeping Windows from going to sleep, suspending the app, or taking any other action that might cause the app to stop running. Of course, this means that the data logger will consume quite a bit of power, but that's the nature of this sort of application: you trade power savings for necessary functionality.
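
    One common way for a Windows Store app to keep the system awake is the DisplayRequest API; the series doesn't show the app's exact mechanism at this point, so treat the following as a hedged sketch of the general technique rather than Location Data Logger's implementation:

    using Windows.System.Display;

    // Hold a display request while logging so Windows does not turn off the display
    // or drop into a low-power state mid-session; release it when logging stops.
    public class KeepAwake
    {
        private DisplayRequest request = null;

        public void Start()
        {
            if (request == null)
            {
                request = new DisplayRequest();
                request.RequestActive();   // ask Windows to keep the device active
            }
        }

        public void Stop()
        {
            if (request != null)
            {
                request.RequestRelease();  // allow normal power management to resume
                request = null;
            }
        }
    }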

    Filtering by estimated accuracy

    The geolocation sensor in Windows 8 is not a single sensor but rather a collection of location inputs from multiple sources. The device's position can come from an IP address, Wi-Fi triangulation, or a high-precision source such as a Global Navigation Satellite System (e.g., GPS). More than likely, the user is most interested in position data coming from the latter, but rather than force this on them, Location Data Logger gives the user a choice: one can filter out "low precision" data, meaning position reports that are not regularly coming from a high-precision source such as GPS. This filtering will be discussed in a future installment of this series.

    Additional features

    In addition to the base requirements, I added two features to Location Data Logger to improve the overall user experience.

    Map view

    Location Data Logger displays the user's current position on a map provided by the Bing mapping service along with an accuracy circle calculated from the estimated horizontal accuracy. While the online map does require an active internet connection to update, it is not a necessary component and the data logger will function without it (albeit without live maps). This gives the user something to look at and review during operation.

    Data view

    The user can bring up a tabular view of the track points logged to the current log files. This allows a review of the track points directly and in a user-friendly manner, without having to open up the log files in a separate text viewer.

    Next: User interface design

    In Part 2, I'll discuss the design of the user interface.

    ← Introduction | Part 2: User Interface →
  • geolocation gps gnss

    NFC Usage in Windows* Store Apps – a Healthcare App Case Study


    Download


    NFC Usage in Windows* Store Apps – a Healthcare App Case Study [PDF 526.46 KB]

    Abstract


    Modern mobile apps take advantage of a myriad of sensor types available on the platform. NFC is one such feature that is becoming increasingly popular, as it is very versatile and allows several types of use cases. In this article we will look at how a sample healthcare app uses NFC to enhance the user experience and enable new usage models. By the end of this article, you will learn how to add NFC usage to Windows* Store apps. Specifically, we will cover how to do protocol activation, how to automatically open your app when a user taps a custom-programmed NFC tag, and how to use an NFC tag as a check-in/check-out mechanism in a hypothetical patient room.

    Contents


    Overview
    NFC and protocol activation in Windows Store Apps
    A Healthcare Line of Business Windows Store App
    Adding NFC to a sample healthcare app – a case study
               Adding Proximity Capability to the app Manifest
               Adding Protocol Activation Extension to the app Manifest
               Handling Protocol activation inside the app
               Detecting NFC availability and Subscribing for message(s)
               Reading and parsing NFC tag information
    Summary

    Overview


    Near Field Communication (NFC) enables short-range wireless connectivity with data speeds of 106, 212, or 424 kbps and requires the devices to be in close proximity (e.g., less than 4 cm). The connection is quick, simple, and automatic. In addition, the connection requires little configuration on the user’s part, unlike other connectivity technologies such as Bluetooth. This makes it extremely convenient for different use cases.

    At a higher level, NFC usage can be divided into three use cases: acquiring information (e.g., reading a URI from an NFC tag), exchanging information (e.g., sending/receiving a photo), and connecting devices (e.g., tapping a device to configure a Bluetooth or other connection). Together, these three categories enable a plethora of NFC use cases. For an in-depth discussion of NFC technology, please refer to this article:
    http://www.radio-electronics.com/info/wireless/nfc/near-field-communications-tutorial.php

    NFC usage is getting more popular by the day. Most of the new generation mobile hardware supports NFC. In this article, we will discuss how NFC enables new user experiences in a sample healthcare app. We will focus on how to use NFC tags for automatically activating (opening) our sample healthcare app and how to use the information embedded in the tag to identify and take additional steps.

    NFC and Protocol Activation in Windows Store Apps


    Windows Store apps can use NFC functionality via the Windows.Networking.Proximity namespace. The proximity namespace classes allow a device to discover other devices nearby and to publish/subscribe messages between them.

    The reference for the Windows Store apps proximity namespace is given below.
    http://msdn.microsoft.com/EN-US/library/windows/apps/windows.networking.proximity

    Before we can use the proximity APIs, we first need to declare the ‘proximity’ capability in the app manifest, which allows Windows to enforce security and user permissions for an app. Additionally, the app must be running in the foreground for it to be able to use proximity related APIs. In the following sections we will walk through these steps as part of a healthcare sample case study. For a detailed reference on NFC in Windows Store apps, please refer to the following article.
    http://msdn.microsoft.com/EN-US/library/windows/apps/hh465221

    We will also take advantage of another feature in Windows Store apps—protocol activation. Protocol activation allows us to register our app to be activated for a particular URI scheme. We can even define our own custom URI scheme that our app registers for. This URI can be fed to the device from anywhere—including the tap of an NFC tag, which we will use in our case study. Please refer to the reference below for more details on protocol activation in Windows Store apps.
    http://msdn.microsoft.com/library/windows/apps/hh779670.aspx
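
    Because protocol activation is driven purely by the URI, any source that feeds the registered scheme to the system will activate the app, not just an NFC tap. As a simple illustration (not code from the case study), another app or a test harness could trigger activation for a custom scheme such as the ‘prapp’ scheme registered later in this article:

    using System;
    using Windows.System;

    // Hypothetical test: launching a custom-scheme URI activates whichever app
    // has registered for that scheme. '101' is an arbitrary example room number.
    async void TestProtocolActivation()
    {
        bool launched = await Launcher.LaunchUriAsync(new Uri("prapp://rm=101"));
    }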

    A Healthcare Windows Store App


    As seen in several other articles in this forum, we will build the case study around a healthcare Line of Business Windows Store app. We will extend it with the capability to do protocol activation and use NFC tags to implement a sample patient room check-in/check-out mechanism.

    Some of the previous articles include:

    The application allows the user to login to the system, view the list of patients (Figure 1), and access patient medical records, profiles, doctor’s notes, lab test results, and vital graphs.


    Figure 1. The “Patients” page of the Healthcare Line of Business app provides a list of all patients. Selecting an individual patient provides access to the patient’s medical records.

    Adding NFC to a Sample Healthcare App – a Case Study


    In this sample app, we can use NFC tags for uniquely identifying patient or lab room(s) that the logged-in user (healthcare provider or doctor) visits. When the user is about to enter the room, he/she can tap on the NFC tag at the entrance. If our sample app is not in the foreground, Windows will automatically activate it (via protocol activation), thereby bringing it to the foreground. We could additionally navigate to the appropriate screen inside the app depending on the reason for app activation (in this case NFC). Next, the app can read the room number embedded inside the NFC tag and log the timestamp and room details for check-in. When the user is about to leave the room, he/she will tap again and the app will log the check-out details. All results will be summarized on the user’s home screen.

    Before we start adding these features to our sample app, we first need to modify the app manifest.

    Adding Proximity Capability to the app Manifest

    Double-clicking on the Package.appxmanifest in your Visual Studio* 2012 project should bring up a manifest UI that allows you to tweak the settings. Figure 2 shows the Proximity capability enabled in the manifest, which lets us use the NFC feature.


    Figure 2. App manifest showing the Proximity package capability (captured from Visual Studio* 2012)

    Our project should now be ready to access the NFC feature.

    Adding Protocol Activation Extension to the app Manifest

    We also want our sample healthcare app to be activated (opened and brought to the foreground) whenever we tap an NFC tag. This is very useful for enhancing the user experience, since the NFC-based proximity namespace classes only work when our app is in the foreground. To achieve this, we need to enable protocol activation.

    In the app manifest window, click on “Declarations.” Here we can add a new “protocol” declaration for our sample app. We can define our own custom URI scheme; in our sample app we register the ‘prapp’ URI scheme. You can choose the custom URI depending on your app requirements. Please refer to the screen shot in Figure 3.


    Figure 3. Protocol declaration for the sample app (captured from Visual Studio* 2012)

    Our sample app should now be ready for NFC and protocol activation features.

    Handling Protocol activation inside the app

    Our sample app will be invoked automatically every time a URI with the ‘prapp’ scheme is triggered on the device. In our case study, we custom-program an NFC tag with a URI of the format ‘prapp://rm=????’ where “????” is any patient or lab room number. When a user taps this NFC tag, Windows automatically reads the URI, notices the ‘prapp’ URI scheme, and triggers activation for our sample app. The activation ‘kind’ is ‘Protocol’. To handle the app activation, we need to override the ‘OnActivated’ method in our Application class. Please refer to the code listing below.

    // Protocol activation
    protected async override void OnActivated(IActivatedEventArgs args)
    {
        if (args.Kind == ActivationKind.Protocol)
        {
            ProtocolActivatedEventArgs protocolArgs = args as ProtocolActivatedEventArgs;

            await InitMainPage(args);

            var frame = Window.Current.Content as Frame;

            // If no user is logged in yet, go to the login page first.
            var loggedinuser = SessionSettingsViewModel.SessionSettings.Loginuser;
            if (loggedinuser == null || !loggedinuser.LoginState)
            {
                frame.Navigate(typeof(Login));
                return;
            }

            // Otherwise, land on the logged-in user's home page.
            if (frame.CurrentSourcePageType != typeof(UserPage)) frame.Navigate(typeof(UserPage));
        }
    }
     

    Figure 4. Handling app activation for the custom URI protocol scheme that we registered

    Our sample app gets activated only when our specified URI scheme is triggered. Inside the ‘OnActivated’ method we also check for the kind of activation. If it is ‘Protocol,’ we can proceed to redirect the user to the appropriate UI screen.

    In this case study, we first check if the user is already logged into the app. If the user is not logged in, we redirect the user to the login page. If the user is already logged in, we redirect the user to his/her home screen where he/she can see the NFC room information summarized. Depending on the app requirements, additional checks and verifications can be performed at this step.
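
    The case study does not show how the room tags themselves were programmed. As an assumption about how that could be done, a Windows Store app can write such a URI to a writable tag by publishing a "WindowsUri:WriteTag" message with the proximity API; the sketch below is illustrative, not part of the sample app:

    using Windows.Networking.Proximity;
    using Windows.Storage.Streams;

    // Sketch: write our custom URI to the next writable NFC tag that is tapped.
    void WriteRoomTag(int roomNumber)
    {
        var device = ProximityDevice.GetDefault();
        if (device == null) return;   // no NFC hardware available

        // The URI payload for WindowsUri messages is UTF-16LE encoded text.
        var writer = new DataWriter { UnicodeEncoding = UnicodeEncoding.Utf16LE };
        writer.WriteString("prapp://rm=" + roomNumber);

        // The publication stays active until a writable tag is tapped or StopPublishingMessage is called.
        long publicationId = device.PublishBinaryMessage("WindowsUri:WriteTag", writer.DetachBuffer());
    }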

    Detecting NFC availability and subscribing for message(s)

    Using the Windows Runtime proximity namespace classes, we can detect whether NFC capability is present on the device. If it is, we can subscribe for specific types of messages. We use the ProximityDevice.GetDefault static method to obtain the default NFC proximity device instance.

    In our case study, we subscribe for messages of type WindowsUri. Please refer to the article linked below for different message types supported.
    http://msdn.microsoft.com/EN-US/library/windows/apps/hh701129

    Below is a sample code snippet from the case study.

    protected override void OnWindowCreated(WindowCreatedEventArgs args)
    {
        // Get the default proximity (NFC) device; null means NFC is not available.
        var proximityDev = ProximityDevice.GetDefault();
        if (proximityDev != null)
        {
            // Subscribe for WindowsUri messages; keep the subscription id so we can unsubscribe later.
            nfcmsgsubid = proximityDev.SubscribeForMessage("WindowsUri", MessageReceivedHandler);
            System.Diagnostics.Debug.WriteLine("prxDev found and registered");
        }
    }
    
            private void MessageReceivedHandler(ProximityDevice sender, ProximityMessage message)
            {
                PRAppUtil.ShowMsg("URI: " + message.DataAsString);
            }
    

    Figure 5. Sample code for checking NFC availability and subscribing for messages

    Additional steps can be performed in the Message handler, depending on your app requirements.

    Reading and parsing NFC tag information

    Since we encoded the patient or lab room information as part of our custom URI (on NFC tag) itself, we can parse the URI to decode the room number and log the check-in details. When our sample app gets activated, the full absolute URI that was obtained from the NFC tag is passed on to the OnActivated method call via the Uri.AbsoluteUri property of the args.

    The code snippet below parses the room information and updates the user view model with check-in and check-out details.

    protected async override void OnActivated(IActivatedEventArgs args)
    {
        if (args.Kind == ActivationKind.Protocol)
        {
            ProtocolActivatedEventArgs protocolArgs = args as ProtocolActivatedEventArgs;

            await InitMainPage(args);

            var frame = Window.Current.Content as Frame;

            var loggedinuser = SessionSettingsViewModel.SessionSettings.Loginuser;
            if (loggedinuser == null || !loggedinuser.LoginState)
            {
                frame.Navigate(typeof(Login));
                return;
            }
            if (frame.CurrentSourcePageType != typeof(UserPage)) frame.Navigate(typeof(UserPage));

            // Parse the room number from a URI of the form 'prapp://rm=????'.
            int rm = Convert.ToInt32(protocolArgs.Uri.AbsoluteUri.Split('=')[1]);

            // Tapping a different room's tag while a check-in is pending resets the pending check-in.
            if (ru.rm != 0 && ru.rm != rm) ru.rm = 0;

            if (ru.rm == 0)
            {
                // First tap for this room: record the check-in time.
                ru.rm = rm;
                ru.cin = DateTime.Now;
            }
            else
            {
                // Second tap on the same room: record the check-out and log the room usage.
                ru.cout = DateTime.Now;
                RoomUsageViewModel.AddRoomUsage(ru.rm, ru.cin, ru.cout);
                ru.rm = 0;
            }
        }
    }
    

    Figure 6. Sample code for parsing the NFC URI for room information and updating the user view model

    In our case study, we have a separate user home page where all the room check-in/check-out information is displayed. It automatically gets updated every time the user taps the room identifier NFC tag with our custom URI scheme. Please refer to the screen shot in Figure 7 for additional details.


    Figure 7. User home page summarizing room check-in/check-out details (captured from Windows* 8)

    Summary


    NFC enables several types of use cases. In this article, we discussed how a sample healthcare app incorporates NFC into its workflow to provide an enhanced user experience and enable new types of usage models. We discussed how you can easily add protocol activation and simple NFC usage to your Windows Store apps.

    Intel and the Intel logo are trademarks of Intel Corporation in the U.S. and/or other countries.
    Copyright © 2013 Intel Corporation. All rights reserved.
    *Other names and brands may be claimed as the property of others.

     

  • ultrabook
  • store
  • applications
  • Notebooks
  • near field communication
  • Apps
  • NFC
  • Intel AppUp® Developers
  • Microsoft Windows* 8
  • Microsoft Windows* 8 Style UI
  • Sensors
  • Laptop
  • Tablet
  • URL

    Privacy in the Connected World - Protecting Sensor Data at Research at Intel Days 2013

    Intel researcher Ken Grewal presents a research prototype for sensor security. Sensors such as cameras, microphones, and position-locating solutions are targeted for malicious uses, including compromising identity, passwords, and financial information. Ken shows a research prototype that combines Intel hardware and software to protect sensor data. To learn more, see Research at Intel 2013.

  • Developers
  • Sensors

    Intelligent Shelving at Research at Intel Days 2013

    Demonstrating Shelf Edge Technology, Intel pursues new technical avenues to help with everyday life. Intelligent Shelving helps consumers quickly identify products to which they may have a food allergy. They can also quickly query which products go with other products or which products are part of a certain product family. To learn more, see Research at Intel 2013.

  • Geolocation
  • Sensors
  • Consumer
  • intelligent saving
  • Research at Intel
  • Intel Labs
  • RAID
  • thomas birch

    Intel Perceptual Computing SDK 2013 gets Gold Release 2


    Our beloved Perceptual Computing SDK recently received an update, bringing a lot of improvements and new features.

    The release carries version number 8779, and the change log alone shows what's new.

    First of all, Java app development support was added, along with Projection support in the framework porting libraries.

    Many of the samples (both C++ and C#) were rewritten or heavily modified.

    For example, the Gesture_Viewer_Simple sample (in C#) was rewritten to be GUI based: it now uses a WinForm to choose options, enable recognition, and show graphical output of the recognized gestures, abandoning the console interface in favor of an event/task-based approach.

    This sample can now genuinely be used as a base for building real applications on the platform.

    The same goes for the Voice_recognition and Voice_Synthesis samples, which were completely rewritten as GUI apps with Unicode text support.

    A good C# sample for face detection and landmark tracking is Face_analysis, which also has a simple GUI.

    A new tutorial was written covering use of the Perceptual SDK with the Havok Vision SDK (a C++ game development platform).

    I also find that responsiveness in general has improved; I work mostly with gesture recognition and noticed a good improvement there.

    In the voice_recognition sample it is possible to switch the audio source between the PC's standard microphone, the camera's microphone array, and the depth sensor; I noticed a big difference between them. With my Ultrabook's built-in mic I couldn't get anything recognized, with the depth sensor about 50% of words were recognized, and with the microphone array a good 80-90% of words were recognized correctly.

    I recompiled my test app with the new SDK and generally see good improvements in speed and reliability.

    Good work guys!

  • Intel Perceptual Computing SDK


    Location Data Logger Design and Implementation, Part 2: User Interface


    This is part 2 of a series of blog posts on the design and implementation of the location-aware Windows Store app "Location Data Logger". Download the source code to Location Data Logger here.

    The Main Page

    Location Data Logger is a fairly simple application, so a single page is sufficient for its user interface. The operational controls such as the start/stop button and precision filter are placed in a sidebar for fast and easy access. The main content area holds either the map display or the grid display, and the user can toggle between them. This approach allows the content view to expand to fill the available screen space without having to know the display resolution. Finally, a lower app bar allows the user to set configuration items.

    The screenshot below shows how the page is divided up. The top row comes from Microsoft's default template, which leaves the first 140 pixels clear of main content. It is below that where the layout gets more interesting: the lower grid consists of two columns, a 320-pixel sidebar and the main content area.

    The width of the sidebar was not chosen arbitrarily. When a Windows Store app is running in a snapped view, it is assigned 320 pixels of screen width, no matter which side it is placed on. Location Data Logger is designed to fit cleanly in the snapped view by placing its primary operational controls and status information in this sidebar. Thus, the user still has complete control over the app's operation, as well as useful feedback on its progress.

    The XAML for the grid layout is:

    <Grid Style="{StaticResource LayoutRootStyle}">
    
           <Grid.RowDefinitions>
                  <RowDefinition Height="140"/>
                  <RowDefinition Height="*"/>
           </Grid.RowDefinitions>
           <Grid Grid.Row="1" Margin="0,0,0,0">
                  <Grid.ColumnDefinitions>
                         <ColumnDefinition Width="320"/>
                         <ColumnDefinition Width="*" />
                  </Grid.ColumnDefinitions>
           </Grid>
    </Grid>

    The bottom app bar is where less frequently needed configuration items are kept, such as the selected log formats and the folder picker for setting the log file directory.

    <common:LayoutAwarePage.BottomAppBar>
           <AppBar>
                  <Grid>
                         <Grid.ColumnDefinitions>
                               <ColumnDefinition Width="Auto"/>
                               <ColumnDefinition Width="*"/>
                         </Grid.ColumnDefinitions>
                         <Button Grid.Column="1" x:Name="buttonSaveDir" HorizontalAlignment="Right" VerticalAlignment="Bottom" Style="{StaticResource FolderAppBarButtonStyle}" Click="buttonSaveDir_Click"/>
                         <StackPanel x:Name="panelLogOptions" Grid.Column="0" Orientation="Horizontal">
                               <TextBlock Text="Log as:" VerticalAlignment="Center" Style="{StaticResource ItemTextStyle}" Margin="0,0,0,10"/>
                               <ToggleButton x:Name="toggleCSV" VerticalAlignment="Center"  Content="CSV" Margin="10,0,0,0" Click="toggleCSV_Click" IsChecked="True"/>
                               <ToggleButton x:Name="toggleKML" VerticalAlignment="Center"  Content="KML" Margin="10,0,0,0" Click="toggleKML_Click"/>
                               <ToggleButton x:Name="toggleGPX" VerticalAlignment="Center"  Content="GPX" Margin="10,0,0,0" Click="toggleGPX_Click"/>
                         </StackPanel>
                  </Grid>
           </AppBar>
    </common:LayoutAwarePage.BottomAppBar>

    First Time Execution

    When the application is launched for the very first time it is possible that the user will start the logger without first configuring the app, and that means that there must be reasonable default options. However, there is one important configuration option that cannot be set automatically, and that is the folder where the log files will be written.

    The user's main document library is the obvious choice for a default, and it is tempting to try and hardcode it by using KnownFolders.DocumentsLibrary, but this requires that the Documents Library capability be declared in the manifest. The problem is that Microsoft has placed severe restrictions on its use, specifically:

    The only acceptable use for the documentsLibrary capability is to support the opening of embedded content within another document.

    The use of this capability is subject to Store policy, and a Windows Store app may not be approved if it is used improperly, so an alternate solution is necessary. What I chose to do is present a folder picker to the user if no log directory has been set at the time they start the logger.

    From MainPage.xaml.cs, in toggleStartStop_Click():

    // Make sure we have a log directory/folder set
    
    if (logger.GetFolder() == null)
    {
           Boolean prompt_for_folder = true;
    
           // Check to see if we defined a folder in a previous session.
    
           try
           {
                  String logdirToken = StorageApplicationPermissions.FutureAccessList.Entries.First().Token;
                  StorageFolder folder = await StorageApplicationPermissions.FutureAccessList.GetFolderAsync(logdirToken);
                  if (folder != null)
                  {
                         logger.SetFolder(folder);
                         prompt_for_folder = false;
                  }                   
           }
    
           catch
           { }
    
           // Prompt the user to choose a folder if one hasn't been set previously.
           if ( prompt_for_folder ) await set_logdir();
     
           if (logger.GetFolder() == null)
           {
                   // The user has not set a folder, or the previously set location no longer exists.
     
                  toggleStartStop.IsChecked = false; // Uncheck the toggle button.
                  return;
           }
    }

    This code works by storing the most recent log directory as the only item on the FutureAccessList.

    • If the user has never set a log directory, then the FutureAccessList is empty and we set prompt_for_folder to true.
    • If they have set a log directory, then the code simply takes the first (and only) directory token on the list and uses it to find the log directory. If that directory no longer exists, e.g., because it has been deleted, prompt_for_folder is set to true. Otherwise, prompt_for_folder is set to false.
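
    The set_logdir() helper is not shown above. Assuming it simply wraps a FolderPicker and remembers the user's choice, it might look roughly like this sketch (not the exact source):

    using System.Threading.Tasks;
    using Windows.Storage;
    using Windows.Storage.AccessCache;
    using Windows.Storage.Pickers;

    // Hedged sketch of set_logdir(): let the user pick a folder, remember it via the
    // FutureAccessList for future sessions, and hand it to the DataLogger object.
    private async Task set_logdir()
    {
        var picker = new FolderPicker();
        picker.SuggestedStartLocation = PickerLocationId.DocumentsLibrary;
        picker.FileTypeFilter.Add("*");   // a filter entry is required even for folder pickers

        StorageFolder folder = await picker.PickSingleFolderAsync();
        if (folder != null)
        {
            // Keep only the most recent choice so it is always the first (and only) entry.
            StorageApplicationPermissions.FutureAccessList.Clear();
            StorageApplicationPermissions.FutureAccessList.Add(folder);
            logger.SetFolder(folder);
        }
    }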

    Transitions and Animations

    Microsoft encourages the use of animation to give a Windows Store app a fluid look, and to call attention to significant items that change on screen. Location Data Logger incorporates animations in two key areas.

    Transition between the map view and the data point view

    The map and the data points are actually displayed in the same grid cell, and the Visibility property is used to determine which one is shown. When the map view is active, its Visibility is set to Visible and the data point display's is set to Collapsed, and vice versa. A smooth crossfade between the two is done via storyboard animation on the Opacity property:

    <Grid.Resources>
           <Storyboard x:Name="mapFadeIn">
                  <DoubleAnimation From="0" To="1" Duration="0:0:0.25" Storyboard.TargetName="gridMap" Storyboard.TargetProperty="Opacity"/>
           </Storyboard>
           <Storyboard x:Name="mapFadeOut" Completed="mapFadeOut_Completed">
                  <DoubleAnimation From="1" To="0" Duration="0:0:0.25" Storyboard.TargetName="gridMap" Storyboard.TargetProperty="Opacity"/>                       
           </Storyboard>
           <Storyboard x:Name="pointsFadeIn">
                  <DoubleAnimation From="0" To="1" Duration="0:0:0.25" Storyboard.TargetName="gridData" Storyboard.TargetProperty="Opacity"/>
           </Storyboard>
           <Storyboard x:Name="pointsFadeOut" Completed="pointsFadeOut_Completed">
                  <DoubleAnimation From="1" To="0" Duration="0:0:0.25" Storyboard.TargetName="gridData" Storyboard.TargetProperty="Opacity"/>
           </Storyboard>
    </Grid.Resources>

    And some code to set the Visibility property to Visible or Collapsed as appropriate.

    private void buttonMap_Click(object sender, RoutedEventArgs e)
    {
           buttonMap.Style = (Style)Resources["ActiveItemTextButtonStyle"];
           buttonPoints.Style = (Style)Resources["InactiveItemTextButtonStyle"];
           if (gridMap.Opacity < 1)
           {
                  mapFadeIn.Begin();
                  gridMap.Visibility = Windows.UI.Xaml.Visibility.Visible;
           }
           if ( gridData.Opacity > 0 ) pointsFadeOut.Begin();
    }
     
    private void buttonPoints_Click(object sender, RoutedEventArgs e)
    {
           buttonMap.Style = (Style)Resources["InactiveItemTextButtonStyle"];
           buttonPoints.Style = (Style)Resources["ActiveItemTextButtonStyle"];
           if ( gridMap.Opacity > 0 ) mapFadeOut.Begin();
           if (gridData.Opacity < 1)
           {
                  pointsFadeIn.Begin();
                  gridData.Visibility = Windows.UI.Xaml.Visibility.Visible;
           }
    }
     
    private void mapFadeOut_Completed(object sender, object e)
    {
           gridMap.Visibility = Windows.UI.Xaml.Visibility.Collapsed;
    }
     
    private void pointsFadeOut_Completed(object sender, object e)
    {
           gridData.Visibility = Windows.UI.Xaml.Visibility.Collapsed;
    }

    When the user switches from, say, the map view to the data point view, the data point grid is made Visible and its Opacity is animated from 0.0 to 1.0. Simultaneously, the Opacity of the map is animated from 1.0 to 0.0. When that animation finishes, the map is collapsed.

    Some of you may be wondering why I collapse the outgoing control only after its animation completes. The answer is that the two controls (the map and the data point grid) are overlaid on top of one another, and this can cause problems with mouse and touch events reaching the correct control. It's not enough to simply change the opacity: to make sure that UI events are applied to the visible control, the invisible control needs to be explicitly collapsed. A control with an Opacity of 0.0 is still an active control that can receive UI events.

    Starting and stopping operation

    When the logger is running, the status area displays a text box indicating the name of the log file and where it is being stored. It also serves as an indicator that the logger is active. Rather than just appear and disappear when the logger is started and stopped, however, storyboard animation is used to do a gradual fade in and fade out. This is very similar to the procedure above, only it is not necessary to collapse the UI element at the end. If the user hits "Reset" while the logger is running, a much faster fadeout occurs to draw attention to the fact that the log file has changed.

    <StackPanel.Resources>
           <Storyboard x:Name="textLoggingInfoFadeIn">
                  <DoubleAnimation From="0" To="1" Duration="0:0:0.5" Storyboard.TargetName="textLoggingInfo" Storyboard.TargetProperty="Opacity"/>
           </Storyboard>
           <Storyboard x:Name="textLoggingInfoFadeOut">
                  <DoubleAnimation From="1" To="0" Duration="0:0:0.5" Storyboard.TargetName="textLoggingInfo" Storyboard.TargetProperty="Opacity"/>
           </Storyboard>
           <Storyboard x:Name="textLoggingInfoBlinkOut">
                  <DoubleAnimation From="1" To="0" Duration="0:0:0.125" Storyboard.TargetName="textLoggingInfo" Storyboard.TargetProperty="Opacity"/>
           </Storyboard>
    </StackPanel.Resources>
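
    The code-behind that actually starts these storyboards is not shown in this post. As a rough sketch (the handler names and status text here are assumptions, not the actual source), it simply picks the appropriate storyboard and calls Begin():

    // Illustrative only: fade the logging-status text in and out as the logger
    // starts, stops, or is reset.
    private void on_logging_started()
    {
        textLoggingInfo.Text = "Logging to " + logger.GetFolder().Name;   // hypothetical status text
        textLoggingInfoFadeIn.Begin();
    }

    private void on_logging_stopped()
    {
        textLoggingInfoFadeOut.Begin();
    }

    private void on_logging_reset()
    {
        // The quicker blink-out draws attention to the fact that a new log file is starting.
        textLoggingInfoBlinkOut.Begin();
    }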

    Next Up: The DataLogger Class

     In Part 3, I'll dive into the code for the core of the application: the DataLogger object and the Geolocation sensor.

    ← Part 1: Application Design | Part 3: The DataLogger Class →
  • geolocation gps gnss location

    Location Data Logger Design and Implementation, Part 3: Geolocation and the DataLogger class


    This is part 3 of a series of blog posts on the design and implementation of the location-aware Windows Store app "Location Data Logger". Download the source code to Location Data Logger here.

    The DataLogger Class

    At the heart of Location Data Logger is the DataLogger object which is responsible for obtaining location reports from the Windows 8 Geolocation sensor and sending that information to the various other components in the application. All of this is implemented within the DataLogger class.

    Location Data Logger is a relatively simple application with only one display page, and I could easily have implemented the geolocation functionality in the MainPage class. I chose to go with a separate class for two reasons:

    1. For anything more than a trivial application, it is good design to compartmentalize your objects. Rather than have MainPage be a mega-class that implements everything from geolocation to writing the data files, I broke the application out into functional components.
    2. Future-proofing. If I decide to add a second display page or state to the application, the code is already capable of supporting that.

    Since this object coordinates all of the activities within the application, it needs to be able to communicate with the relevant Windows 8 sensors as well as the objects that are responsible for writing the data logs. Some of its private class members include:

    Geolocator geo;
    SimpleOrientationSensor sorient;
    ExportCSV eCSV;
    ExportGPX eGPX;
    ExportKML eKML;

    Initialization

    When the DataLogger object is created, considerable initialization takes place.

    public DataLogger()
    {
           lastupdate = new DateTime(1900, 1, 1);
           hp_source = hp_tracking = running = false;
           position_delegate = null;
           status_delegate = null;
           logCSV = logGPX = false;
           logKML = true;

           geo = null;

           sorient = SimpleOrientationSensor.GetDefault();

           folder = null;
           eGPX = new ExportGPX();
           eCSV = new ExportCSV();
           eKML = new ExportKML();
    }

    The Geolocator object is initialized inside of the Resume() method, which is called from MainPage when the application is ready to start or resume tracking the device’s position (though not necessarily logging).

    public void Resume()
    {
           geo = new Geolocator();
           geo.DesiredAccuracy = PositionAccuracy.High;
           geo.MovementThreshold = 0;
    
           geo.StatusChanged += new TypedEventHandler<Geolocator, StatusChangedEventArgs>(geo_StatusChanged);
           geo.PositionChanged += new TypedEventHandler<Geolocator, PositionChangedEventArgs>(geo_PositionChanged);
    }

    Though all of this work takes place in two separate places, I’ll discuss them as a whole.

    The geolocator sensor is initialized and immediately configured with a MovementThreshold of 0 and a DesiredAccuracy of High. Most, if not all, GPS receivers calculate their position once per second, and the goal of the application is to record every position report received, even when the position has not changed. These settings ensure we receive reports from the location device as they are reported, and prevent the Windows Sensor API from filtering some out.

    Event handlers for the Geolocator‘s PositionChanged and StatusChanged events are also installed, a topic that I cover in detail below.

    While I initialize a SimpleOrientation sensor, I do not create an event handler for it. This is because the data logger records the device’s orientation at the time a position update comes in, not when the orientation changes. This means an event handler is not only unnecessary, but unwanted.

    Why include the SimpleOrientation sensor at all, though? It’s certainly not necessary for geolocation. The answer is that this information might be useful to a device manufacturer. A device’s antenna design can have a significant effect on the reception quality of radio signals, and reception can be orientation-sensitive.

    Also note that I set two variables, hp_source and hp_tracking, to false, and initialize lastupdate to a time in the distant past (Jan 1st, 1900). These variables are used to internally determine and track 1) whether or not we have a high precision data source, and 2) if the user has asked to log only high precision data. Essentially what is happening here is that I assume the location data is not high-precision until proved otherwise.
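
    Although the logging code itself appears later in the series, reading the orientation at log time would presumably be a simple polled call along these lines (a sketch under that assumption, not the app's exact code):

    // Poll the orientation sensor at the moment a position report is logged,
    // rather than reacting to orientation-change events.
    SimpleOrientation orientation = SimpleOrientation.NotRotated;   // default if no sensor is present
    if (sorient != null)
    {
        orientation = sorient.GetCurrentOrientation();
    }
    // 'orientation' can then be written to the log entry alongside the position data.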

    The call from MainPage.xaml.cs that gets everything started looks like this:

    public MainPage()
    {
          this.InitializeComponent();
    
          …
    
          logger = new DataLogger();
          logger.SetCallbackStatusChanged(update_status);
          logger.SetCallbackPositionChanged(update_position);
          logger.Resume();
    
          …
    }

    (The SetCallback* functions are explained below.)

    Identifying high-precision geolocation data

    The Location API in the Windows 8 Runtime abstracts the location source from the developer (and, in turn, the user). As explained in my blog “The WinRT Location API: Where did my location data come from?”, the geolocation sensor is actually a merging of multiple inputs, some from specialized hardware devices such as GPS/GNSS receivers (if present), and some from software sources such as WiFi triangulation. The API does not provide the developer with a means of explicitly determining where a location report originated. The best you can do is make an educated guess based on the reported accuracy and other characteristics of the position reports.

    The DataLogger class looks at a combination of two factors: the update rate, and the Accuracy reported in the Geocoordinate object. This is done inside the log_position() method, which is called from the geo_PositionChanged() event handler:

    TimeSpan deltat;
     
    deltat = c.Timestamp - lastupdate;
     
    // Do we have high-precision location data?
     
    if (deltat.TotalSeconds <= 3 && c.Accuracy <= 30) hp_source = true;
    else hp_source = false;

    I somewhat arbitrarily chose a reporting interval of 3 seconds as the threshold, since some consumer GPS devices may update once a second but send position reports via their NMEA output stream every two seconds (this accommodates people using external GPS devices as a sensor via GPSDirect). The 30-meter accuracy threshold was also somewhat arbitrary: consumer GPS accuracy is typically on the order of a few meters, and car navigation systems can provide reasonable guidance with only 30 meters of accuracy.

    Geolocation Events and Delegates

    The DataLogger class implements event handlers for the PositionChanged and StatusChanged events so that the logger object can record the positions as they come in, as well as keep track of the status of the Geolocator sensor. One problem, though, is that the UI display needs to be updated as well, and so those events also need to reach the MainPage object. There are two options for accomplishing this:

    1. Have the MainPage object also register event handlers with the Geolocator object for the PositionChanged and StatusChanged events.
    2. Use delegates in the DataLogger object to call the appropriate methods in the MainPage class when PositionChanged and StatusChanged events arrive.

    Both methods have their advantages and disadvantages. I went with the second option because it limited the amount of redundant code, and it also allowed me to pass additional information in the delegate that is not part of the PositionChanged event.

    The callbacks are defined in the DataLogger class:

    public delegate void Position_Changed_Delegate (Geocoordinate c, Boolean logged);
    public delegate void Status_Changed_Delegate (PositionStatus s);
     
    public class DataLogger
    {
           Position_Changed_Delegate position_delegate;
           Status_Changed_Delegate status_delegate;
     
           …
     
           public void SetCallbackPositionChanged (Position_Changed_Delegate p)
           {
                  position_delegate= p;
           }
     
           public void SetCallbackStatusChanged(Status_Changed_Delegate s)
           {
                  status_delegate = s;
           }
     
           …
    }

    And registered in MainPage when that object is initialized:

    logger.SetCallbackStatusChanged(update_status);
    logger.SetCallbackPositionChanged(update_position);

    The extra information passed in the Position_Changed_Delegate is whether or not the last received trackpoint was logged by the DataLogger object. This allows me to update the UI display with not only the device’s current position, but also with the number of data points that have been logged to one of our data files (and, as we’ll see later on, whether or not to add it to the visible breadcrumb trail in the map view). This would be difficult to accomplish if the MainPage object registered a PositionChanged event directly as it would need to then query the DataLogger object to get this extra information. This could potentially present a race condition if two PositionChanged events arrived in rapid succession.
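
    Putting this together, the PositionChanged handler inside DataLogger presumably looks something like the sketch below; the exact signature and return value of log_position() are assumptions here, and the real handler is in the downloadable source:

    // Hedged sketch of how DataLogger might forward position updates to MainPage.
    void geo_PositionChanged(Geolocator sender, PositionChangedEventArgs e)
    {
        Geocoordinate c = e.Position.Coordinate;

        // Log the point (subject to the precision filter) and note whether it was written.
        Boolean logged = log_position(c);

        // Tell the UI about the new position and whether it was added to the log files.
        if (position_delegate != null) position_delegate(c, logged);
    }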

    ← Part 2: User Interface
  • geolocation gps gnss location

    Puppet in Motion Case Study


    By Geoff Arnold

    Download


    Puppet in Motion Case Study [PDF 1.16MB]

    Sixense Puts Puppets in Perceptual Motion


    Danny Woodall (creative director) and Ali Diaz (CTO) of Sixense Studios have been through their share of tough user acceptance testing. After all, for years they’ve been blazing a trail in the market for motion tracking technologies, where ideas about what’s possible too often come from the movies rather than the real world. (For example, think of the gesture interface Tom Cruise used in the 2002 film “Minority Report.”)

    Woodall and Diaz led the Sixense Entertainment team that developed the grand-prize-winning app in the seven-week Intel® Ultimate Coder Challenge: Going Perceptual contest, which wrapped up in April. Sixense’s virtual puppeteering app Puppet In Motion bested entries from six other developer teams in the challenge’s main criteria: to build groundbreaking apps that would make the most of the perceptual computing capabilities enabled by the latest Ultrabook™ device platforms. But before they could submit the app to the Intel judges, they had to satisfy a more finicky set of critics—their kids.

    “My daughter is five,” said Woodall, the creative director at Sixense Studios, the business unit within Sixense Entertainment that is focused on software development. “She sees me with my hand raised in the air and I’m driving these little wolf and pig puppets on the screen and she gets ecstatic; she jumps in my lap and wants to do it too.”

    Indeed, if the work of Sixense and the other entrants is any indication, children growing up today will increasingly expect perceptual computing to be the norm rather than a novel way to interact with computing technology.

    The virtual puppets in the winning app come from the iconic Three Little Pigs fairy tale.


    Figure 1. Sixense created scenes at surfer pig’s beachfront grass hut, lumberjack pig’s log house, and suburban pig’s practical bungalow.

    Train Your Users Well


    Woodall and Diaz say the key to delivering a good perceptual computing experience is to focus on the user and not on the capabilities of the various new and evolving sensor technologies available in the hardware. Applying this approach to their contest app meant spending time on features that weren’t immediately part of gameplay but nonetheless were essential to the overall experience. Perhaps the best example of this is the brief setup and calibration that users walk through upon starting the app.

    When you launch Puppet In Motion, and before you start acting out the scene between the surfing pig and the hungry wolf, a darkened stage appears. The red curtains rise and there are the pig and wolf, each standing in a pool of light.

    You raise your hand and a virtual image of it appears, along with on-screen text that instructs you how to place your hand in the correct position to operate the virtual puppets. This calibration is a key element and contributes to a positive first impression, which so often sets the tone for a user’s overall impression of an app.


    Figure 2. Hand calibration at launch of the Puppet In Motion app.

    The calibration step was critical in teaching users how to interact with the gesture recognition in a way that establishes a set of conventions for the game. When told they’re about to try out a virtual puppet, most users first tip their hand forward toward the screen, much as they would if they had their hands in actual sock puppets. The problem is that with the hand in this position, tracking where fingers are relative to each other and to the palm is extraordinarily difficult. In contrast, the camera has no problem “seeing” the data to bring the puppets to life if the user’s hand is more or less flat to the camera. To operate the puppets’ mouths, the user simply moves his thumbs: up (adjacent to the palm, pointing in the same direction as the four fingers) for closed; down (away from the palm, at a right angle to the four fingers) for open.


    Figure 3. Hand position to close the puppet’s mouth.

    At first, Diaz didn’t readily accept this limited hand position. After many days of coding and testing, he actually was able to get the camera to recognize a standard “pinching” hand position to move a puppet’s mouth. (Again, think sock puppets.)

    He experimented with an algorithm that essentially assumed that the closest points to the camera were the fingertips. Then, after tinkering with the 2D depth buffer, the algorithm looked for separation between these points to estimate the top and bottom of the hand. Ultimately, Diaz got the code to the point where users could control the puppets' mouths with their hands in a “natural” position. The C# Unity code in its entirety can be found in the Ultimate Coder Challenge: Going Perceptual week 4 contest blog post.
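    Diaz’s actual code is in the week 4 contest blog post mentioned above; the fragment below is only a rough sketch of the general idea, with invented names and thresholds: scan the depth buffer for the samples nearest the camera and treat their vertical spread as a crude measure of how far apart the thumb and fingers are.

    using System.Collections.Generic;
    using System.Linq;
    using UnityEngine;

    // Illustrative only; not the PXCPuppetJointController code from the contest blog.
    public static class HandDepthSketch
    {
        // Treat the nearest depth samples as fingertip candidates, then use their vertical
        // spread as a crude measure of how open the hand (and thus the puppet mouth) is.
        public static float EstimateOpenness(ushort[] depth, int width, int height, int samples)
        {
            List<Vector2> nearest = Enumerable.Range(0, width * height)
                .Where(i => depth[i] > 0)                        // 0 = no depth reading
                .OrderBy(i => depth[i])                          // closest to the camera first
                .Take(samples)
                .Select(i => new Vector2(i % width, i / width))
                .ToList();

            if (nearest.Count == 0) return 0f;

            float top = nearest.Min(p => p.y);
            float bottom = nearest.Max(p => p.y);
            return (bottom - top) / height;                      // 0 = closed, larger = more open
        }
    }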

    However, all the compensation and extrapolation of the incoming camera data spoiled much of the rest of the puppets' motion, including moving about in three dimensions in the on-screen environment or changing the direction the puppet is looking. (Some of the drama obviously drains away if the wolf is looking in another direction as it’s threatening the pig.)

    Ultimately, requiring that users hold their hands upright, palms flat to the camera, was the right technical solution. The initial mandatory calibration helped ensure this approach was right for users—and was among the more innovative features of the app. The evolution of perceptual computing will depend in part on the establishment of conventions for controlling a computer and navigating apps. Gesture and facial recognition will eventually need to be as well understood as using a mouse to control a cursor and click and drag objects, or pinching and swiping on a touch screen. Indeed, the calibration won raves according to the comments from the contest judges excerpted on the Intel Ultimate Coder Challenge: Going Perceptual contest page. One judge exclaimed, “Keep it simple and educate users about perceptual computing in the first 2 seconds. This is the perceptual computing poster boy team!” Another judge wrote, “Best feedback, best gesture recognition...+1 for the awesome UI feedback on hand position + calibration step.”

     public void UpdateRotation ()
    {
    	if ( PuppetJoint == null ) return;
    	
    	Quaternion baseQuat = new Quaternion();
    	
    	Vector3 forwards 	= _playerController.ControllerObject.transform.forward;
    	Vector3 up 			= _playerController.ControllerObject.transform.up;
    	Vector3 right 		= _playerController.ControllerObject.transform.right;
    
    	float roll = 0.0f, pitch = 0.0f, yaw = 0.0f;
    	
    	if ( ( InputBind == PXCInputType.ORIENTATION ) && _playerController.ControllerSettings.UseHandOpenness )
    	{	
    		Vector3 lookDir = new Vector3( _playerController.SmoothHandNormal.x,
    									   _playerController.SmoothHandNormal.z, 
    									  -_playerController.SmoothHandNormal.y);
    		
    		Quaternion lookQuat = Quaternion.LookRotation( lookDir );
    		
    		Vector3 eulerAngles = lookQuat.eulerAngles;
    		
    		roll 	= ClampAngles( eulerAngles.z, Constraints.MinRoll, Constraints.MaxRoll );
    		pitch 	= ClampAngles( eulerAngles.x, Constraints.MinPitch, Constraints.MaxPitch );		 
    		yaw 	= ClampAngles( eulerAngles.y, Constraints.MinYaw, Constraints.MaxYaw );
    	}
    	
    	else if ( InputBind == PXCInputType.OPENNESS )
    	{
    		// handle openness press 
    		float handOpenness = _playerController.ControllerSettings.UseHandOpenness ?
    							 _playerController.UseAdjustedThumbOpenness ?
    							 _playerController.SmoothThumbOpenness :
    							 _playerController.SmoothHandOpenness :
    							 _playerController.SmoothPinchOpenness;
    		
    		float inputVal = ( handOpenness / 100.0f ) * RotationMultiplier;
    		
    		roll 	= Mathf.Clamp(inputVal, Constraints.MinRoll, Constraints.MaxRoll);
    		pitch 	= Mathf.Clamp(inputVal, Constraints.MinPitch, Constraints.MaxPitch);
    		yaw 	= Mathf.Clamp(inputVal, Constraints.MinYaw, Constraints.MaxYaw);
    	}

    	// ... remainder of the method omitted from this excerpt
    }

    Figure 4. Screen shot of the PXCPuppetJointController code sample, which demonstrates how Sixense applied orientation and openness input to a puppet joint (bone).

    Use the SDK Plug-in to Speed Learning of the Unity Game Engine


    One of the key decisions Woodall and Diaz faced was which game engine to use. Both had ample experience in AAA game development from their previous positions at Sega Studios.

    However, after considering a few different engines, they decided to use Unity*, an engine they had never worked with before, in part because the Unity plug-in is included with the Intel Perceptual Computing SDK.


    Figure 5. The basic components of the Puppet In Motion app.

     "When we first fired up Unity, it was sort of eerie how familiar the interface and editor felt. It was just like the previous engine we worked with when we were at Sega. I mean, so much so that we were joking around, saying 'Wow, did somebody leave Sega and go over there and recreate this?'"

    “It's amazing to see what people have accomplished within a night's work within Unity. Given the time constraints of the contest, it was pretty clear to us that this was the way to go,” he continued. “It’s such a great engine to get your ideas prototyped and running quickly. Everything is so compartmentalized. You can attach scripts to game objects, and they’re so contained that with little effort, you’re up and running.”

    The Intel plug-in gave Diaz a head start in using the Unity engine by exposing much of the SDK’s functionality through an easy-to-use interface. But getting up to speed with a new engine and using the plug-in presented challenges.

    Diaz experienced occasional system crashes when using the Unity editor with the Intel Perceptual Computing SDK. And he found that at least some of the features available in the SDK’s C++ API weren’t accessible through the Intel plug-in.

     using UnityEngine;
    using System.Collections;
    
    public class PXCManager : MonoBehaviour 
    {
    	public static PXCUPipeline PXCUPipelineInstance = new PXCUPipeline();
    	public static Texture2D PXCImage = null;
    	
    	private static bool m_bPXCUPipelineInitialized = false;
    	private static int[] m_PXCImageSize = new int[2];
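    	// Note: m_PXCUPipelineMode, m_PXCVoiceCmds, and g_bPrimaryIsRight are declared
    	// and configured elsewhere in the class; they are not shown in this excerpt.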
    	
    	void Start()
    	{
    		Debug.Log( g_bPrimaryIsRight ? "Primary is Right" : "Primary is Left" );
    		
    		if ( !m_bPXCUPipelineInitialized )
    		{
    			if ( !PXCUPipelineInstance.Init( m_PXCUPipelineMode ) )
    			{
    				print( "Unable to initialize the PXCUPipeline\n" );
    				return;
    			}
    			else
    			{
    				m_bPXCUPipelineInitialized = true;
    				
    				if ( !PXCUPipelineInstance.SetVoiceCommands( m_PXCVoiceCmds ) )
    				{
    					print( "Failed adding voice commands" );
    				}
    			}
    		}
    		
    		if ( !PXCImage && PXCUPipelineInstance.QueryLabelMapSize( m_PXCImageSize ) )
    		{
    			PXCImage = new Texture2D( m_PXCImageSize[0], m_PXCImageSize[1], TextureFormat.RGB24, false );
    		}
    	}
    	
    	void OnApplicationQuit()
    	{
    		PXCUPipelineInstance.Close();
    		PXCUPipelineInstance = null;
    		PXCImage = null;
    	}
    }

    Figure 6. A screenshot of code that accesses the Intel® Perceptual Computing SDK from the Unity* engine via the SDK’s Unity plug-in.

    Diaz and his colleagues fed all this information back to Intel, which set up the contest in the first place to encourage innovation around perceptual computing—among thought leaders and industry early adopters and within Intel itself. The contest, which was launched when the Intel Perceptual Computing SDK was still in beta, was marked by an open-source-like level of collaboration and sharing; contestants blogged weekly about their experiences with the Intel Perceptual Computing SDK, including issues that arose and solutions that worked to surmount them. Most of the contestants had the opportunity to meet face-to-face at the Game Developers Conference 2013 and borrowed freely from each other to collectively improve the resulting apps that the judges eventually considered.

    The likely end result of all this sharing is long-term improvement of the entire Intel Perceptual Computing SDK ecosystem, an outcome made all the more probable because Intel is using contest feedback such as Diaz’s to improve the SDK going forward.

    For Rapid Prototyping, Separate Artists and Coders ASAP


    Diaz and Woodall were joined by three other Sixense colleagues in building the app. Designer Chip Sbrogna set up the various scenes and oversaw the overall story arc. Art Director Dan Paullus and artist AJ Sondossi fleshed out the puppets—note surfer pig’s floppy ears and expressive eyes—and the rest of the app’s visual assets. Together this team represented more than a quarter of Sixense’s software development staff, though at most they worked only half-time on the contest app because of other projects and deadlines.


    Figure 7. From left to right: Alejandro Diaz, Dan Paulus, Danny Woodall, and Chip Sbrogna.

    While Diaz and Woodall worked on the puppet controller system, the artists got started creating their models and the associated rigs, textures, and shaders. There was much communication upfront as the developers figured out precisely what parts of a virtual puppet could be animated by their puppet controller. Swiveling the head and moving the mouth to speak, yes; crouching or running, no.

    However, once the main decisions about the puppet controller were made, the artists were free to work more or less independently. Woodall says the goal was to set up a production flow so that early on, the artists could start bagging all the bones they were interested in attaching to the controller. Bringing a puppet to life was simply a matter of importing it into the code base, dragging it into a scene, and attaching a script to it.


    Figure 8. Wolf puppet character rig.

    In addition to the puppet controller, the calibration system, and a fairly simple UI, Diaz created a director system that gave the designer and the artists the ability to set up the individual scenes and the locations of the imaginary camera capturing the action. The system also helped in transitions from one scene to the next, ensuring that each puppet was always where the user’s hand was relative to the camera as the pig and wolf moved from beach to forest to suburbia.

    Diaz also incorporated AVPro Movie Capture*, a Unity plug-in available on the Unity Asset Store, to make it easy to record gameplay directly as AVI files. This is a common-sense feature in a world where the ability to post and share video from gameplay is expected.

    using UnityEngine;
    using System.Collections;
    
    public class PuppetShowDirector : MonoBehaviour 
    {
    	public PuppetSet[] 	Sets 		= new PuppetSet[1];
    	public int 			ActiveSet 	= 1;
    	public bool 		EnableDebug = false;
    	
    	private int 		m_currentSet = 0;
    	
    	void Start ()
    	{
    		if ( ActiveSet <= Sets.Length )
    		{
    			m_currentSet = ActiveSet - 1;
    		}
    		Sets[m_currentSet].Init(this);
    	}
    	
    	void Update () 
    	{
    		if ( m_currentSet != ActiveSet - 1 )
    		{
    			if ( ActiveSet <= Sets.Length && ActiveSet > 0 )
    			{
    				if ( Sets[m_currentSet].SetGameObject )
    				{
    					// disable and hide the current set
    					Sets[m_currentSet].SetGameObject.SetActive( false );
    				}
    				
    				m_currentSet = ActiveSet - 1;
    				
    				if ( Sets[m_currentSet].SetGameObject )
    				{
    					// enable and show the new active set
    					Sets[m_currentSet].SetGameObject.SetActive( true );
    				}
    			}
    			else
    			{
    				ActiveSet = m_currentSet + 1;
    			}
    			
    			// initialize the new set
    			Sets[m_currentSet].Init(this);
    		}
    		
    		if ( m_currentSet <= Sets.Length )
    		{
    			// update the current set
    			Sets[m_currentSet].Update();
    		}
    	}
    }

    Figure 9. A screenshot of the code that handles transitions between sets.

    Prepare for Perceptual Computing, the Future of Human-Computer Interaction


    With high expectations for Sixense’s future, both in regard to Puppet In Motion and the rest of its activities, Woodall and Diaz have several ideas for improving the winning app and perhaps even releasing a version to the public, with help from Intel.

    There’s a laundry list of features to add. At the top of the list: make it easier for a single user to put on a show with two puppets. For now, for the wolf to make good on his threat to a pig’s house or even to transition between scenes, a user has to tap either the touch screen or the keyboard.

    Woodall says the app’s basic technology could be used in several ways. One is creating content for children, including educational material. (What is Sesame Street if not teaching with puppets?) Then there are the unexpected use cases. For example, Woodall says that occasionally the Sixense team convenes on Skype*, sharing their desktops and interacting with each other as puppets.

    “It’s interesting what happens in this sort of ‘sand-boxy’ environment,” he said. “You drop your guard and don't really care about being silly and self-conscious anymore; you become much more creative.”

    All of which points to the fact that the world of perceptual computing, the one Intel is helping to create with the Intel Perceptual Computing SDK and a host of other activities, is fast approaching. Indeed, the leading edge is already here.

    Developers looking for guidance might do well to follow the example of the three little pigs. As you’ll recall, the first two pigs, who built flimsy houses of straw and sticks, didn’t fare well. Only the third pig, with his stout house of bricks, prevailed against the wolf’s bluster.

    So go on and build an app, one that’s sturdy enough to stand up to users who blunder about as they learn to move beyond the confines of the desktop GUI. Rely on industry-leading tools and hardware, especially from Intel. And most of all, take seriously the idea that new ways of controlling computers beyond keyboards and mice and even touch screens are fast moving from movies to niche applications to the mainstream.

    Resources


    Intel does not make any representations or warranties whatsoever regarding quality, reliability, functionality, or compatibility of third-party vendors and their devices. For optimization information, see software.Intel.com/en-us/articles/optimization-notice/. All products, dates, and plans are based on current expectations and subject to change without notice. Intel, the Intel logo, and Ultrabook are trademarks of Intel Corporation in the U.S. and/or other countries. *Other names and brands may be claimed as the property of others. Copyright © 2013. Intel Corporation. All rights reserved.

  • ultimate coder challenge
  • ultrabook
  • Tablet
  • applications
  • Notebooks
  • Perceptual Computing
  • Developers
  • Microsoft Windows* 8
  • Windows*
  • Unity
  • Sensors
  • Laptop
  • URL
    Meshcentral.com - Server Issues & News Update


    Quick post to update everyone on a few things:

    • Meshcentral.com has been having outages. A few hours two days ago and a few more yesterday, both times during the night. The server itself is actually fine, but the administrators of my server room are doing some work on the network. I did get a few emails about this and forwarded the concerns to the server room administrators. Hopefully it will not happen again.
    • Mesh agent v1.71. I just released a new version of the agent to fix a problem with the agent running on some machines. Version 1.70 had a new wireless scanning feature and made use of "wlanapi.dll", which I assumed all versions of Windows had, even back to Windows XP. Well, that was not correct: some versions of Windows Server don't come with wireless services installed, and the new agent failed to run. Agent v1.71 binds to this DLL dynamically, so if the DLL is not present the agent will still run, just with WiFi scanning disabled.
    • Intel Developer Forum. It's that time of year again: IDF 2013 will be in San Francisco September 10 to 12. I will be a speaker this time around, with one session and two labs. My topic is connecting Intel platforms to the cloud, and I will have a great time using as many hardware features as I possibly can to make Intel computers work with the cloud. More blogs on this to come.

    That is it for now,
    Ylian
    meshcentral.com

  • Mesh
  • MeshCentral
  • MeshCentral.com
  • p2p
  • database
  • IDF
  • intel developer forum
  • IDF2013
  • Ylian

    Intel Highlights for Developers at Developer Week 2013 in Nuremberg


    The annual Developer Week 2013 conference, organized by Neue Mediengesellschaft Ulm mbH, took place from June 24 to 26, 2013, in Nuremberg. The event is actually a collection of several conferences: the Developer Week (DWX for short), the Web Developer Conference (WDC), the Mobile Developer Conference (MDC), and the .NET Developer Conference (DDC) all meet in one week and in one place. That is especially helpful for developers, who are often interested in more than one topic area and therefore regularly face the dilemma of which conference to attend.


    Shortly before the opening of the DWX 2013 conference

    We from the Intel Developer Zone team were there as exhibitors. In our Intel Developer Lounge, matching the concept of the conference, we presented a developer trend for each topic area. These trends combine our hardware and software solutions, and the software solutions alone drew a wow effect. Intel is, after all, known for good hardware, but what do we have to do with software for developers? That is exactly what I would like to demonstrate here once more with a few examples:

    Perceptual Computing

    In German this is called "Wahrnehmungs-Computing," and it describes a new way of interacting with computers, one in which the typical input devices such as the keyboard, mouse, or touch screen are a thing of the past. Here, software is controlled by voice and by natural hand movements in the air. With the help of a 3D camera from Creative and the Intel® Perceptual Computing SDK 2013, you can implement these new capabilities in your own software using C++ or C#.


    Tanja and Tina playing their favorite game, Kung Pow Kevin

    The concept itself is not new. People have long talked about augmented reality, in which software interactively builds on the images coming from a video camera. With the Xbox 360 game console, Microsoft achieved a breakthrough with its Microsoft Kinect 3D camera.


    Christoph, too, waved his hands wildly in the air

    Nevertheless, the free Intel® Perceptual Computing SDK 2013 currently offers an innovative alternative to the familiar solutions. The SDK is explicitly designed for individual finger gestures, so ready-made events recognize, for example, "thumbs up," "thumbs down," or the familiar two-finger "peace sign." It goes even further: integrated face analysis determines a person's mood, whether someone is smiling or sad, and even which gender the person appears to be.

    The free Intel® Perceptual Computing SDK 2013 is available for download here:
    http://software.intel.com/en-us/vcsource/tools/perceptual-computing-sdk

    More information will follow soon in a separate blog post from me.

    Intel's App Framework - One App for Every Platform

    Create an app for every platform from a single code base: this dream was already made possible by the mobile development framework PhoneGap. Development happens only once, in HTML5 and JavaScript; the rest is generated automatically in the cloud.


    Something completely different: Uli played a song to match his HTML5 developer tools talk

    Intel's App Framework is essentially an extension of PhoneGap and provides many additional useful components. A particular highlight is the XDK development environment, which is available entirely from the browser as a web application. That spares you a tedious installation, and it can be used at any time from any device.


    Intel® XDK

    Another personal highlight compared to classic PhoneGap is that Intel builds the app for every platform in the cloud free of charge. The App Framework and the XDK also remain free. What more could a developer want?

    More information:
    http://html5dev-software.intel.com

    Here, too, I will be publishing a number of how-tos in the near future.

    Tablets, Ultrabooks, and Ultrabook Convertibles

    Ultrabook is an Intel concept for particularly small and light notebooks with Intel processors. To carry the name, devices must meet a series of specifications, including long battery life, acceptable performance, and tablet-like characteristics such as fast wake from standby. Distinguishing features are a multi-touch screen and numerous sensors such as GPS, NFC, accelerometer, magnetometer, gyrometer, and an ambient light sensor.


    Tablets, Ultrabooks, and Ultrabook convertibles

    Ultrabook convertibles go a step further. These Ultrabooks are hybrid devices that can be transformed into a tablet in seconds at any time. The launch of Windows 8 was also the launch of this new hardware generation, and well-known manufacturers such as Dell, Lenovo, Asus, and Toshiba released their own take on it, each offering its own conversion mechanism. Two of the most popular are the Lenovo IdeaPad Yoga 13 and the Dell XPS 12.


    Acer Iconia W510 Tablets

    These two device types are therefore ideal candidates when it comes to app development for tablets and mobile devices. In the following video I show how easily the sensors can be accessed with the new Windows Runtime:

    My tip: Windows 8 - App-Entwicklung für UltraBook Sensoren mit WinRT - Developer Garden TechTalk
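
    For readers who want the gist without watching the video, here is a minimal sketch of reading one of those sensors, the accelerometer, through the Windows Runtime (Windows.Devices.Sensors); error handling is omitted for brevity, and the class name is illustrative.

    using Windows.Devices.Sensors;

    // Minimal sketch: subscribe to accelerometer readings via the Windows Runtime sensor API.
    public class AccelerometerSample
    {
        private Accelerometer _accelerometer;

        public void Start()
        {
            _accelerometer = Accelerometer.GetDefault();
            if (_accelerometer == null) return;              // device has no accelerometer

            _accelerometer.ReportInterval = _accelerometer.MinimumReportInterval;
            _accelerometer.ReadingChanged += OnReadingChanged;
        }

        private void OnReadingChanged(Accelerometer sender, AccelerometerReadingChangedEventArgs args)
        {
            AccelerometerReading reading = args.Reading;
            System.Diagnostics.Debug.WriteLine(
                "X={0:F2} Y={1:F2} Z={2:F2}",
                reading.AccelerationX, reading.AccelerationY, reading.AccelerationZ);
        }
    }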

    Smartphones with ATOM Power

    The legendary Intel ATOM processor, which has proven itself in netbooks and tablets with its extremely long battery life, is also available for smartphones. It enables fast loading of web pages and applications, and switching from app to app is smooth and quick, just as you are used to from powerful computers.


    Here Florian shows the smartphones with Intel® technology

    There are more interesting Intel treats, however. The first Russian Android smartphone, the MegaFon Mint, offers full HD video playback in 1080p format. Its 8-MP HD camera takes roughly 10 shots within one second, so no fleeting moment is missed. The best part is still to come: wireless display support, which transmits content wirelessly to a suitable HD television with the corresponding adapter. As a developer, you can easily give a professional presentation of your own app on a big screen.


    A racing game running on the smartphone can conveniently be played on a monitor via the mini-HDMI output

    With such a wide range of technologies for smartphones, app development becomes a real pleasure.

    My tip: Intel Smartphone Reference Demo

    Conclusion

    For me personally, it was exciting to learn the story behind each individual developer. The conference started with a young woman who is developing software for the medical field with the help of Kinect. The challenges of day-to-day game development almost made me break a sweat, and my sympathy went out to the web developers who still suffer from the HTML5 compatibility problems of the various browsers.

    On short notice, I stepped in to give talks for two cancelled sessions, speaking about "Developing for Sensors on Windows 8" and "Software Architecture Using the GoFish Example."



    Another highlight was our big party, to which attendees who visited our booth received exclusive access. The music was provided by Uli, who also performs on big stages outside of work.

    In any case, we had great fun offering something for everyone, and I am already looking forward to next year, when it will be time for DWX 2014!


    Infrared5 Case Study


    By William Van Winkle

    Download Article


    Infrared5 Case Study [PDF 534 KB]

    Introduction


    “Cats are incredibly effective hunters and are wiping out our native birds.”
    - Gareth Morgan

    In 2012, Gareth Morgan became mildly famous in New Zealand for drawing attention to the plight of native flightless birds and how domestic animals, particularly housecats, are decimating the kiwi population. Who could have guessed that the ensuing feline firestorm would inspire an award-winning advance in gaming and perceptual computing?

    Backing up a few years, Chris Allen was an experienced Boston programmer. His wife Rebecca had business savvy and design experience from the print world. They started a small coding house called Infrared5 to develop applications for client companies. The Allens steadily grew Infrared5 and cultivated a keen appreciation for Google’s 20-percent time philosophy. Applying this philosophy, an Infrared5 employee got to twiddling with creating an Apple iPhone*-based control system for remote control helicopters. This eventually became Brass Monkey*, an SDK that allows Google Android* and Apple iOS* devices to serve as input controllers for Unity-based, Flash, HTML5, and native games.

    In early 2013, Intel invited Infrared5 to compete in its Ultimate Coder Challenge: Going Perceptual, a groundbreaking contest that provided participants with an Ultrabook™ device, Creative Interactive Gesture Camera development kits, a still-evolving perceptual computing SDK, and all of the support possible for letting their imaginations run rampant. With its focus on sensor-based input, Brass Monkey technology seemed a natural complement to perceptual computing—but how to meld the two?

    In finding an answer, Infrared5 devised a wholly new dual-input system, blending Wi-Fi* handheld devices and perceptual computing in a proof of concept that may change gaming forever.

    Kiwi Katapult Revenge*: Form and Function


    “Early on, I had the idea of using a head tracking mechanic and combining it with the phone as a game controller,” said Chris Allen. “In my head, it was this post-apocalyptic driving, shooting game, something like Mad Max* in 3D. But Rebecca and [art director] Aaron Artessa had been wanting to do a paper cutout concept for a long time. Then we heard an NPR newscast about New Zealand and how the kiwi birds were getting killed by domesticated cats. We thought it would be fun to do a little political play and have you be a bird named Karl Kiwi, able to fly around, firing lasers from your eyes, breathing fire, and taking revenge on the cats.”

    Those accustomed to WASD keyboard controls or traditional console gamepads may find Infrared5's gameplay a bit daunting at first. Brass Monkey uses accelerometer data from the Wi-Fi connected phone to control flying movement. Screen taps on the phone, typically with thumbs, control firing.


    Figure 1. Infrared5’s Brass Monkey software allows iOS* and Android* phone devices to serve as controllers along with Creative’s gesture camera.

    Face tracking using the gesture camera governs aiming, plus there’s also voice input. Karl can shoot flames from his mouth when the player shouts “aaahhhh!” or “fiiiirrree!”

    That may seem like a lot for someone to juggle, but feedback from early players indicates that the gameplay was surprisingly natural after a little coaching and getting properly positioned in front of the camera. (Rebecca Allen noted that an in-game tutorial and calibration will drop the learning time to only a couple of minutes.) Head turns give the natural ability to peer around objects. The whole experience is remarkably intuitive. Still, over weeks of refining the interface and mechanics, the six-person development team found itself making several major changes.


    Figure 2. Your flying kiwi isn’t the only one equipped with laser beam eyes. Note the fairly simple UI and graphics palette for faster processing.

    “One of the problems we faced with the perceptual computing was with face tracking,” said Rebecca Allen. “You had to identify that people were actually controlling the view of the world with their face. We ended up doing a rearview mirror where you could actually see yourself and how you’re moving. Your face actually changes the perception, with the bird’s head moving as well. That also gave us the ability to see what’s behind the bird, because you’d be getting just slaughtered by cats from behind and not realize what was going on and who was shooting at you.”
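
    Infrared5’s own tracking code is discussed below; purely as an illustration of the head-coupled view idea, a normalized head offset reported by a face tracker can be mapped onto the in-game camera roughly like this (all names and ranges here are invented):

    using UnityEngine;

    // Illustration only: map a normalized head offset (-1..1 on each axis) onto camera yaw/pitch.
    public class HeadCoupledView : MonoBehaviour
    {
        public float maxYawDegrees = 30f;
        public float maxPitchDegrees = 15f;
        public float smoothing = 8f;

        private Vector2 _smoothedHead;

        // headOffset comes from the face tracker: x = left/right, y = up/down, both normalized.
        public void SetHeadOffset(Vector2 headOffset)
        {
            _smoothedHead = Vector2.Lerp(_smoothedHead, headOffset, Time.deltaTime * smoothing);
        }

        void LateUpdate()
        {
            float yaw = _smoothedHead.x * maxYawDegrees;
            float pitch = -_smoothedHead.y * maxPitchDegrees;
            transform.localRotation = Quaternion.Euler(pitch, yaw, 0f);
        }
    }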

    Challenges Addressed During Development

    Not surprisingly, Chris Allen had no experience with computer vision when he and Infrared5 started Intel’s contest. He admits having read a book on the subject a couple of years prior, but with no hands-on expertise, the team faced a steep learning curve in the opening weeks.

    Infrared5 designers were particular about the kind of lighting and atmosphere they wanted in Kiwi Katapult Revenge. However, the impressive visualization and multiple input streams placed a significant processing load on the little Lenovo IdeaPad* Yoga convertible Ultrabook device Intel provided to Infrared5 for the contest. To help keep the user experience fluid and fun, Team Kiwi took several resource-saving steps, including the following:

    • Dropped the face tracking frame rate. Since people in the image field were likely not to move much across several frames, Infrared5 found it could perform analysis less often and save on processing load (see the sketch after this list).
    • Optimized the process threading, leveraging the Ultrabook device’s quad-core CPU to offload certain tasks to available cores and load-balance more effectively.
    • Pared down the color and visual assets. This change saved graphical resources and helped reduce the data load hammering the GPU core, while having little effect on the player experience.
    • Filtered out any faces beyond a distance of one meter (3.28 feet). The camera’s depth sensor made this possible, and by eliminating so many extra faces in crowded environments, the face processing load dropped.
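
    As a minimal sketch of the frame-skipping idea from the first bullet (AnalyzeFaces() is a hypothetical stand-in for the SDK's face-analysis call, and the interval is illustrative):

    using UnityEngine;

    // Illustration only: run the (expensive) face analysis every Nth frame instead of every frame.
    public class ThrottledFaceTracking : MonoBehaviour
    {
        public int analyzeEveryNthFrame = 4;         // tune for load vs. responsiveness (must be >= 1)

        private int _frameCounter;
        private Vector3 _lastHeadPosition;           // reused on the frames we skip

        void Update()
        {
            if (analyzeEveryNthFrame < 1) analyzeEveryNthFrame = 1;

            _frameCounter++;
            if (_frameCounter % analyzeEveryNthFrame == 0)
            {
                _lastHeadPosition = AnalyzeFaces();  // hypothetical call into the face-tracking pipeline
            }

            ApplyHeadPosition(_lastHeadPosition);    // gameplay still updates every frame
        }

        private Vector3 AnalyzeFaces()
        {
            // Placeholder for the real SDK query; returns the tracked head position.
            return _lastHeadPosition;
        }

        private void ApplyHeadPosition(Vector3 headPosition)
        {
            // Placeholder: feed the cached head position into aiming and the camera view.
        }
    }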


    Figure 3. Currently, perceptual computing requires the addition of a USB gesture camera, but future Ultrabook™ device generations will likely integrate stereoscopic cameras directly.

    Given Infrared5’s experience in working with Unity, it seemed sensible to use the Unity SDK for Kiwi Katapult Revenge and write the code in C#. Team members knew the SDK included a head tracking mechanism and so expected to “get face tracking for free.” However, it turned out the results simply weren’t responsive enough to feel realistic; aiming and shooting times were skewed too far. The team burned up two weeks figuring this out. Finally, they decided to take depth data from the SDK and combine it with the OpenCV computer vision library. Because the programmers couldn’t get enough low-level access, they moved that work into native code and used a DLL to communicate with Unity.

    To resolve the head tracking responsiveness issue, Infrared5 devised a matrix map algorithm based on the camera’s position that stretched the optics so that closer objects appeared bigger. Because there was very little code publicly available for doing this, the programmer had to read everything available on the subject, including academic papers and two books on OpenCV, and then write the routine to Unity in C# from scratch. The team ran into issues with the C# port of OpenCV in Unity and finally ended up rewriting it in C++. Infrared5 plans to make this new code open source to help foster the perceptual gaming community.

    Despite warnings to the contrary from Intel specialists, Infrared5 went into the Ultimate Coder Challenge: Going Perceptual Challenge thinking that they could conquer gaze tracking. At least within the contest’s seven weeks, they were left disappointed.

    “We were reaching for robust feature tracking to detect if the mouth was open or closed, the orientation of the head, and the gaze direction right from the pupils,” said Infrared5 on its blog. “All three of the above share the same quality that makes them difficult: in order for specific feature tracking to work with the robustness of a controller in real time, you need to be confident that you are locked onto each feature as the user moves around in front of the camera. We have learned that finding track-able points and tracking them from frame to frame does not enable you to lock onto the targeted feature points that you would need to do something like gaze tracking. As the user moves around, the points slide around. Active Appearance Model may help us overcome this later.”

    Like all of the other contestants, Infrared5 worked with the Intel® Perceptual Computing SDK while it was still in beta, which meant that programmers encountered the inevitable gaps and bumps. This is to be expected with any new technology, and Infrared5 took the tool in the manner in which it was intended. As the company posted on its synopsis post for the third challenge week, “They [Intel] are trying out a lot of things at once and not getting lost in the specifics of just a few features. This allows developers to tell them what they want to do with it without spending tremendous effort on features that wouldn’t even be used. The lack of decent head, gaze, and eye tracking is what’s inspired us on to eventually release our tracking code as open source. Our hope is that future developers can leverage our work on these features.” Infrared5 would like to continue working with Intel to advance the SDK, possibly with its code merged into the Intel Perceptual Computing SDK.

    A fair bit has been written about how the Ultimate Coder Challenge: Going Perceptual Challenge contestants cooperated with one another, lending encouragement and sharing tools. Less has been noted about the same sort of relationship existing between Intel and the contest participants. Intel worked hand-in-hand with the contestants, helping them through their issues, and observing their needs and priorities. Participants emerged knowledgeable and skilled with perceptual computing—traits that can in turn be immediately applied to new products ahead of their competition.

    Lessons Learned, Advice Given


    Infrared5 tied with Lee Bamber for winning the Best Blog category in the Ultimate Coder Challenge: Going Perceptual Challenge. As seen in the few examples cited here, the Infrared5 crew went to great lengths in documenting their progress and sharing their wisdom with the broader community. Naturally, some things never made it into the blog, and Infrared5 wants to make sure that readers know of several key points as they progress into the world of perceptual computing.

    First, people are not used to controlling software with their head. While some elements of the Kiwi Katapult Revenge mechanic are quite natural, many users found that head control and the dual-input paradigm require a two-minute tutorial—a tutorial the team wasn’t able to create during the contest. Originally, Infrared5 tied the up-and-down movement to head control, but this resulted in players instinctively performing squats while trying to fly, which wasn’t quite the desired experience (although it could be in an exercise app!). They removed the feature and found alternatives.


    Figure 4. Infrared5 showcased Kiwi Katapult Revenge at the annual Game Developers Conference in San Francisco. Enthusiastic response was the key in helping to fine-tune face tracking and accelerometer inputs.

    “Don’t get caught up in your expectations,” advised Chris Allen. “Say you expect to get full-on eye tracking working. That could’ve totally stopped us. Working within constraints is a really important thing to always do, even though you maybe aren’t hitting everything you want. Sometimes through those limitations you can actually discover something, which is like more of a breakthrough.”

    Infrared5 is also fond of conducting a sort of conceptual triage at the beginning stages of a project. Try to identify the biggest risk elements within the project and then devise tests to see if those elements become problematic or not. As carpenters say: measure twice, cut once. The Kiwi Katapult Revenge team did this with image processing, checking first to make sure that the gesture camera could connect to Unity, and then writing the code to connect the two. Take on successive chunks, and prioritize by risk.

    Also consider the target form factor early in the planning. For example, tablets lie flat on a table. Kiwi Katapult Revenge cannot operate on a tablet with an external gesture camera because there is no way to mount the gesture camera to the device and have it point at a user. The Lenovo IdeaPad Yoga convertible Ultrabook device, in contrast, has several form factor possibilities and can mount the camera. With a tablet, they might not have even attempted bringing in their Brass Monkey tools.

    Finally, Chris urges developers to collaborate with their peers, much as they did with other Intel contestants. By sharing code and ideas, all teams emerged more enriched. In the process, perceptual computing not only grew in its capabilities but also nudged that much closer toward having industry-standard commands and code sets. Had the contestants remained isolated, such progress would have been much less likely.

    Resources


    Perhaps not surprisingly given the newness of the perceptual computing field, Infrared5 didn’t make much use of outside resources during the creation of Kiwi Katapult Revenge. They did make use of the Unity sample code Intel provided. Intel technicians also provided constructive feedback to the Infrared5 designer, helping to massage and smooth the app over inevitable rough spots. Infrared5 engineers consulted books on OpenCV and made use of multiple open source libraries. Again, collaboration with other Ultimate Coder Challenge: Going Perceptual Challenge teams was invaluable.

    Looking Ahead


    Infrared5 is working on adding more achievements, more enemies, and more non-player characters to Kiwi Katapult Revenge. It compliments Intel “for not building out every feature” in its SDK because now the company has real-world feedback from developers and early adopters to help optimize the toolset for the features that matter most. This can only help accelerate perceptual computing’s progress and ensure a better software experience for everyone.

     

    Intel does not make any representations or warranties whatsoever regarding quality, reliability, functionality, or compatibility of third-party vendors and their devices. For optimization information, see http://software.intel.com/en-us/articles/optimization-notice/. All products, dates, and plans are based on current expectations and subject to change without notice. Intel, the Intel logo, Intel AppUp, and Ultrabook are trademarks of Intel Corporation in the U.S. and/or other countries. *Other names and brands may be claimed as the property of others. Copyright © 2013. Intel Corporation. All rights reserved

  • Ultimate Code Challenge
  • face tracking
  • ultrabook
  • Developers
  • Intel AppUp® Developers
  • Microsoft Windows* 8
  • Windows*
  • C/C++
  • Unity
  • Intel® Perceptual Computing SDK
  • Perceptual Computing
  • Microsoft Windows* 8 Desktop
  • Sensors
  • Laptop
  • URL
    Code-Monkeys Case Study


    Downloads


    Code Monkeys Case Study [PDF 501KB]

    By Edward J. Correia

    Perceptual Computing For the First Time


    When Code-Monkeys entered the Ultimate Coder Challenge: Going Perceptual, the 12-year-old software development company had no experience with perceptual computing. But that was okay; part of the idea behind the seven-week contest was to educate and to encourage innovation around the use of non-touch gestures—mainly of the hands and face—as inputs for controlling computer programs.

    With the Intel® Perceptual Computing SDK in beta, Intel engaged with early innovators and thought leaders to encourage collaboration and build a community and knowledge base. Code-Monkeys took Stargate Gunship*, an orbital shooter game built a few months earlier, and modified it to accept movements of the head and hands, as well as voice commands, as inputs. "It felt like the perfect opportunity to expand into perceptual computing," said Chris Skaggs, who founded Code-Monkeys in 2000. "The controls already relied on a fairly basic touch interface using a single finger, so mapping that to head and/or hand tracking felt like a doable project."

    Skaggs called on Gavin Nichols, main programmer of the original game that would eventually become Stargate Gunship. "My main priority in this contest was to redesign the game itself for the camera," said Nichols. "This meant writing code so that our GUI system, which was meant for touches and clicks, also worked with perceptual input."

    The Control Schema Issue


    Stargate Gunship’s pre-existing control schema, which used touch input to move a target reticule to an area to be fired upon, proved unstable for perceptual input. The original control scheme was based on standard orbital-shooter paradigms, and the weapon always fired toward the center of the screen, explained Nichols. Changing where you aimed meant changing where you looked. "The first time we started playing with the camera, it was shaky," said Nichols. They knew that even with marginal shaking, a constantly moving camera was not going to be conducive to an enjoyable playing experience. They needed a way to decouple camera angle and firing angle, which would create the ability to fire anywhere on the screen at any time without having to change camera angles. But sampling the data frame by frame created jitters. The reality was that the camera was too sensitive to motion even with the steadiest of hands. And the problem worsened as the player's arms got tired.

    John McGlothlan, who has been programming since age 7, was called in to tackle the issue. He created a buffering system that averages the target hand's position and gesture over a certain number of frames. This resulted in a smoothing of motion as the program displayed a moving average of the hand's position. If the average reflects no movement, the reticule isn't moved. Using McGlothlan's nickname, the team dubbed the algorithm “Lennie's Custom Averages”; sampling takes place at 60 frames per second (FPS) with no perceptible lag.
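
    Code-Monkeys’ exact implementation isn’t reproduced in this article; the sketch below only illustrates the general shape of such a smoother, with an invented window size and dead zone: average the last N hand samples and leave the reticule alone when the average barely moves.

    using System.Collections.Generic;
    using UnityEngine;

    // Illustration of a frame-averaging smoother in the spirit of "Lennie's Custom Averages".
    public class HandPositionSmoother
    {
        private readonly Queue<Vector3> _samples = new Queue<Vector3>();
        private readonly int _windowSize;
        private readonly float _deadZone;
        private Vector3 _lastOutput;

        public HandPositionSmoother(int windowSize, float deadZone)
        {
            _windowSize = windowSize;   // e.g., 15 samples is about 0.25 s of history at 60 FPS
            _deadZone = deadZone;       // ignore averaged movement smaller than this
        }

        public Vector3 Smooth(Vector3 rawHandPosition)
        {
            _samples.Enqueue(rawHandPosition);
            if (_samples.Count > _windowSize)
                _samples.Dequeue();

            Vector3 average = Vector3.zero;
            foreach (Vector3 s in _samples) average += s;
            average /= _samples.Count;

            // Only move the reticule when the averaged position has actually changed.
            if ((average - _lastOutput).sqrMagnitude > _deadZone * _deadZone)
                _lastOutput = average;

            return _lastOutput;
        }
    }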


    Figure 1: Developers discovered that players have an easier time controlling the action using head and hand gestures if their perceptual inputs are displayed on-screen during play.

    Perceptual Input Class

    Lennie’s Custom Averages was a major breakthrough. This led to the development of the Perceptual Input Class—a full input class to process perceptual input—and ultimately to an event- and variable-driven input class that could be accessed at any time and in any script. "This was done at first to match how Unity input interaction was done," said McGlothlan, who's responsible for most of the program debugging. But the ultimate benefit was the ability to contain all perceptual coding in a single file that, when inserted into other programs, would make them "perceptual-ready."

    Modular code development is a huge benefit for efficient multi-platform porting. "I can translate all of my different inputs into game-related terminology," said Nichols, standardizing terms such as "jump," "shoot," or "look along this vector." This lets all of a game's code work from the same set of inputs. "My camera is looking for a vector. Whether or not I'm firing is a single boolean." Then all of the different control schemes feed into this singular place—a uniform data type—and the program can deal with each input system's unique strengths and weaknesses in its own space without messing up another system's code.
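
    A sketch of what such a uniform data type could look like (the field names are illustrative; the actual Perceptual Input Class exposes roughly 30 variables, some of which are listed below):

    using UnityEngine;

    // Illustrative uniform input type: every control scheme (touch, gamepad, perceptual)
    // fills in the same fields, and the rest of the game only ever reads these.
    public struct GameInputState
    {
        public Vector3 LookVector;   // "my camera is looking for a vector"
        public Vector3 AimPoint;     // where the reticule should sit in world space
        public bool    IsFiring;     // "whether or not I'm firing is a single boolean"
        public bool    InputLost;    // e.g., the hand or face left the camera's view
    }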

    To be portable, the new input class had to be verbose, giving more data than needed. This meant that if a different control schema was to be used, only simple changes were necessary rather than rewriting major portions of the Perceptual Input Class. The code below shows half of the computed public variables used across the application. They can be accessed in real time and represent only those for face tracking; there are about 30 accessible variables in all.

    public Vector3 facePosition;
    public Vector3 facePositionTemp;
    public Vector3 leftEyePosition;
    public Vector3 rightEyePosition;
    public Vector3 deltaEyePosition;
    public Vector3 leftMouthPosition;
    public Vector3 rightMouthPosition;
    public Vector3 deltaMouthPosition;
    
    public Vector3 faceDelta;
    public Vector3 leftEyeDelta;
    public Vector3 rightEyeDelta;
    public Vector3 deltaEyeDelta;
    public Vector3 leftMouthDelta;
    public Vector3 rightMouthDelta;
    public Vector3 deltaMouthDelta;
    

    It wasn't long before Code-Monkeys realized that there was more to perceptual interface building than changing the inputs; the visual feedback also needed to change. "Gavin [Nichols] had such a hard time representing where his head was," said McGlothlan. "Through many conversations, Gavin decided to add the code to see the current inputs in the GUI. With that, we knew we made a huge stride."


    Figure 2: A debug view designed to reveal the results of a raycast and the expected orientation of the head. The information this provided for the developers proved to be even more valuable to the player.

    According to Nichols, his aha moment came while he was taking a mental break by experimenting with some shaders on the characters and he saw a ghost head. "I made some changes to the models and suddenly I had icons that showed what they were meant to represent." Nichols still had much more to do, including development of new control paradigms and reticules, and tweaking practically everything else related to gameplay. Some of his greatest challenges were in making changes to the GUI feedback icons and getting the reticule to stand out more by adding rotating crosshairs to the edge, and adding some functionality to the GUI to allow hand gestures to activate buttons. Here's how he dealt with some of these things programmatically:

    public class ReticlePlacement : MonoBehaviour {
    
    
    
    
    	public CameraStuff myCameraStuff;
    	public AimGuns myAimGuns;
    	public Gun myGun;
    	public LayerMask myLayerMask;
    	public MeshRenderer myRenderer;
    	public bool showing;
    	public float damping;
    	
    	private Ray ray;
    	private RaycastHit hit;
    	
    	void OnEnable(){
    		Fire.changeWeapon += changeWeapon;
    		if(myCameraStuff == null) myCameraStuff = (CameraStuff)GameObject.FindObjectOfType(typeof(CameraStuff));
    		if(myAimGuns == null) myAimGuns = (AimGuns)GameObject.FindObjectOfType(typeof(AimGuns));
    	}
    	
    	void OnDisable(){
    		Fire.changeWeapon -= changeWeapon;
    	}
    	
    	public void changeWeapon(Gun newWeapon){
    		myGun = newWeapon;
    		transform.localScale = Vector3.one * Mathf.Max(myGun.damageRadius, 5f);
    	}
    	
    	public void LateUpdate(){
    		if(Application.platform == RuntimePlatform.IPhonePlayer){
    			if(Input.touchCount == 2) showing = true;
    			else showing = false;
    		}
    		if(showing && myRenderer != null){
    			myRenderer.enabled = true;
    			if(myCameraStuff != null) ray = myCameraStuff.firingVector;//new Ray(myGun.transform.TransformPoint(Vector3.zero),  myGun.transform.TransformDirection(Vector3.forward));
    			if(Physics.Raycast(ray, out hit, 10000f, myLayerMask)){
    				transform.position = Vector3.Lerp(transform.position, hit.point, Time.deltaTime * damping);
    				myAimGuns.transform.LookAt(hit.point);
    			}else{
    				Vector3 oldposition = transform.position;
    				transform.position = ray.origin;
    				transform.LookAt(ray.origin + ray.direction);
    				transform.Translate(Vector3.forward * 150f);
    				if(myAimGuns != null) myAimGuns.transform.LookAt(transform.position);
    				transform.position = Vector3.Lerp(oldposition, transform.position, Time.deltaTime * damping);
    				//myRenderer.enabled = false;
    			}
    		}else if(myRenderer != null){
    			myRenderer.enabled = false;
    		}
    	}
    }
    
    
    
    
    public class PerceptualStateVisualizer : MonoBehaviour {
    	
    	public SwitchSprite spriteSwitcher;
    	public SpriteColorCycle myCycler;
    	
    	public void Update(){
    		Vector3 outputPosition = Vector3.zero;
    		string newSprite = "";
    		if(PerceptualInput.instance.collectingData){
    			myCycler.Cycling = false;
    			if(PerceptualInput.instance.currentGesture == PerceptualInput.gesture.Closed){
    				newSprite = "ClosedHand";
    			}else if(PerceptualInput.instance.currentGesture == PerceptualInput.gesture.Open){
    				newSprite = "OpenHand";
    			}else if(PerceptualInput.instance.currentGesture == PerceptualInput.gesture.Peace){
    				newSprite = "HandPeace";
    			}else if(PerceptualInput.instance.currentGesture == PerceptualInput.gesture.Missing){
    				newSprite = "HandPeace";
    				myCycler.Cycling = true;
    			}else if(PerceptualInput.instance.currentGesture == PerceptualInput.gesture.Unrecognized){
    				newSprite = "OpenHand";
    			}
    		}
    		if(spriteSwitcher != null){
    			if(!spriteSwitcher.CompareSpriteName(newSprite)){
    				spriteSwitcher.SwitchTo(newSprite);
    			}
    		}
    		
    	}
    	
    }
    
    
    
    
    public class PerceptualGuiClick : MonoBehaviour {
    	
    	public Camera myGUICamera;
    	public delegate void broadcastClick(Vector3 clickPosition);
    	public static event broadcastClick alternateClick;
    	private bool lastFrameOpen;
    	
    	public void Update(){
    		//if(PerceptualInput.instance.currentGesture == PerceptualInput.gesture.Closed || PerceptualInput.instance.currentGesture == PerceptualInput.gesture.Open){
    			if(lastFrameOpen && PerceptualInput.instance.currentGesture == PerceptualInput.gesture.Closed){
    				if(alternateClick != null) alternateClick(myGUICamera.WorldToScreenPoint(this.transform.position));
    			}
    			lastFrameOpen = !(PerceptualInput.instance.currentGesture == PerceptualInput.gesture.Closed);
    		//}
    	}
    }
    
    
    
    
    public class AlternativeGuiClicks : MonoBehaviour {
    	
    	public Camera GUICamera;
    	public LayerMask GuiLayerMask;
    	
    	public void OnEnable(){
    		PerceptualGuiClick.alternateClick += Click;
    	}
    	
    	public void OnDisable(){
    		PerceptualGuiClick.alternateClick -= Click;
    	}
    	
    	public void Start(){
    		if(GUICamera == null){
    			GUICamera = this.GetComponent<Camera>();
    		}
    	}
    	
    	public void Click(Vector3 screenPoint){
    		Ray myRay = new Ray();
    		if(GUICamera != null) myRay = GUICamera.ScreenPointToRay(screenPoint);
    		RaycastHit hit = new RaycastHit();
    		if(Physics.Raycast(myRay, out hit, 100f, GuiLayerMask)){
    			//Debug.Log(hit.transform.name);
    			hit.transform.SendMessage("OnClick", SendMessageOptions.DontRequireReceiver);
    		}
    	}
    }
    
    

     

    From a UI perspective, one of the key takeaways for the team was the importance of visual feedback about what the user's movements were doing within the program. When testing the app on first-time users, people initially had a difficult time making the mental connection that their hand was moving the on-screen target. "The core conversation while people were playing was that they weren’t quite sure what was happening and they found that frustrating," said Skaggs. That kind of user feedback led directly to a hand-and-head calibration tool at the start of the game and the ghosted head and hand images throughout gameplay. "Although I think we’ll make that a temporary thing that will fade out and only reassert itself if we lose track of the user," Skaggs said.


    Figure 3: When starting a game, the player is presented with a short series of tasks and instructions that allow the camera to calibrate its input parameters to the player. But equally important is the player’s experience of calibrating his or her physical motions with the ghosted image on the screen.

    The Bandwidth Problem


    In the real world, military operations for decades have been using lasers to track eye movement for weapons sighting in the heads-up displays of fighter jets. Computers controlled by hands waving in the air have been part of sci-fi pop culture for years. One film that often springs to mind along with perceptual computing is “Minority Report,” in which police use perceptually controlled computers to solve crimes before they happen. But first-time users on the Code-Monkeys team met with a few surprises. First were the physical limitations such as arm fatigue and the limited head movement when playing on a small screen. Then came the technical restrictions such as the raw computing power needed to drive perceptual systems along with the graphics that surround the UI.

    The amount of data collected by the camera is enormous and can require a large percentage of the CPU to process and identify as game inputs. However, only about 5 percent of that data was relevant to the controls of Stargate Gunship; the rest was wasted data. The next challenge was to parse the data and figure out how to determine which part of the stream to keep, and to turn the stream on and off as needed and otherwise filter it. It was that knowledge, said Skaggs, which became one of the team’s key insights for saving CPU cycles—to identify the player's hand and keep track of where it is. "For example," he said, "a typical person’s hand [usually] winds up in the lower-right corner of the screen. As soon as we can identify that, we just ignore the rest of the screen." They ended up ignoring roughly three-quarters of the pixel data coming in from the two cameras, which he said equates to a savings of about 300 percent. "If it’s not where the hand is we don’t care. That is a key insight."
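
    A minimal sketch of that idea, with an invented window size: once the hand has been located, only the pixels in a window around its last known position are processed on subsequent frames.

    using UnityEngine;

    // Illustration only: restrict per-frame pixel processing to a window around the
    // hand's last known position instead of scanning the whole camera image.
    public static class HandRegionOfInterest
    {
        // Returns the pixel rectangle to scan this frame, clamped to the image bounds.
        // windowSize is assumed to be no larger than the image dimensions.
        public static Rect Around(Vector2 lastHandPixel, float windowSize, float imageWidth, float imageHeight)
        {
            float x = Mathf.Clamp(lastHandPixel.x - windowSize / 2f, 0f, imageWidth - windowSize);
            float y = Mathf.Clamp(lastHandPixel.y - windowSize / 2f, 0f, imageHeight - windowSize);
            return new Rect(x, y, windowSize, windowSize);
        }
    }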

    Further challenges arose when decoding what the user's hand was doing. Because of the way the camera interprets bones in the hand, certain gestures are more prone to misidentification under a broad range of circumstances. For example, one of the gestures they tried to use for calibration was a simple thumbs-up. But for some reason it was often misunderstood. They brainstormed ideas for gestures they thought might be easier to recognize, settling on the peace sign, which delivered a better and much faster success rate.


    Figure 4: Pass or Fail - A key to hand gestures that Code-Monkeys developers found to work best and worst.

    McGlothlan's biggest challenge was trimming and optimizing the data stream, and the epiphany came in removing the m_Texture.Apply(); function call. “With the SDK as a starting point we naturally assumed that every line was important. But when we hit a wall in squeezing out performance we started looking deeper and trimming fat.” According to McGlothlan, that single line of code caused the input class to take 90 percent of the processor's time. "In a game, any input class has about 3 percent available to it, maybe 10 percent if the graphics are small," he said. But 90 percent is laughable. With that single line of code removed, McGlothlan said, CPU usage for the class dropped from 90 percent to just 3 percent, and the app was able to run at 60 FPS instead of between 5 and 15 FPS. The test environment was a high-powered laptop with an Intel® Core™ i7 processor and a dedicated GPU.


    Figure 5: A selection of accepted gestures, their graphical cues, and the code that identifies them.

    Development Process


    Stargate Gunship was developed using the Unity game development system from San Francisco-based Unity Technologies. Unity provides a development ecosystem complete with a 3D rendering engine, terrain and object editors, thousands of pre-made objects, and tools for debugging and process workflows. The environment targets 10 major computing, mobile, and console platforms.

    A key tool in Unity's debugging arsenal is the memory profiler. "We used this to track any sort of performance hang-ups and see where [the program] was spending most of the time," said Nichols. "Unity's built-in debugger has a wonderful deep analysis tool showing which lines in [which] classes are using the most processing time. The company creates a development environment that makes processing outside inputs so much easier," he said.

    In the context of processing perceptual inputs, Unity also proved adept. The environment permitted the creation of control objects with constraints and filters on the object itself. This gave Skaggs and his team a way to throttle the huge streams of incoming data without having to build the logic themselves. "Tools inside Unity helped us get what we wanted at a relatively cheap cost in a processor sense and in terms of man hours," said Skaggs. The Unity user forum also was a good source of developer insight and was visited most frequently after brainstorming new concepts or approaches.

    Voice Processing Fail

    In addition to the application's visual inputs, Stargate Gunship also accepts spoken commands. As the game's main sound designer, John Bergquist, a confirmed Macgyver nut, was ready to introduce the game to a wider audience. And what better set of beta testers could there be than the hordes of developers roaming the aisles of the Game Developers Conference 2013 (GDC)? But as Skaggs recalls, things didn't go quite as planned. "At GDC, we just never really thought about what happens when you’re on a really busy [show] floor and it’s really noisy."

    “Ambient noise is the killer,” said Skaggs. In a quiet room with limited background noise, most words were recognized about equally well. But the app was unable to decipher commands amid the cacophony, or when the player spoke too rapidly, and Skaggs was ready to yank the whole thing out. But cooler heads prevailed and the team began to explore possible reasons for the failure. In the evening following the first day at GDC, John McGlothlan reflected on the day’s experience and proposed a tweak that proved to be brilliant. "John had a trick," recalled Skaggs. "He said, 'Wait a second, if I do a simple change, we’d shave off the big problem of [vocal command] input getting confused.'”

     

    They ultimately discovered that plosives—words that begin with P, B, or other popping consonants—were far more recognizable than words starting with vowels. So they began to cull their verb set to include only words that were easily recognizable—ideally to limit the set to single-syllable plosives. This one change improved recognition not only in crowded settings, but also in quiet ones.


    Figure 6: Unity 3D’s IDE has been a “go to” technology for Code-Monkeys over the last four years and saved dozens of man-hours with its built-in physics and performance tools.

    Conclusion


    Despite about a dozen years building software and five years building games, the developers at Code-Monkeys were in uncharted territory when it came to perceptual computing. When faced with the time pressures of the Ultimate Coder Challenge, the team tapped into a deep pool of resources when it could and learned the rest on-the-fly. Some of the team’s most daunting challenges included coping with enormous amounts of data coming from the perceptual camera, deciphering head and hand gestures from busy backgrounds, fine-tuning voice recognition and audio commands, eliminating jitter when displaying visual feedback of gestures, and making visual feedback more useful to the player. Beneath it all was Intel’s support for grassroots app development through coding initiatives, and the resources and support network it makes available to developers.  

    Resources


    The team relied heavily on the Unity Forum, a community consisting of thousands of developers the world over. The team also tapped extensively into the NGUI forum. The Next-Gen UI for Unity includes an event notification framework developed by Tasharen Entertainment. Code-Monkeys also engaged the help of Nat Iwata, a long-time art director and visual resource for Code-Monkeys and Soma Games. Another key partner and Flex/Flash* expert was Ryan Green, whose current Unity 3D project, “That Dragon, Cancer,” will touch your heart. And of course there's the Intel Perceptual Computing SDK and Intel Perceptual Online Documentation, which above all else were the team's most useful resources. The documentation provided enough insight to allow the team to move forward using their own backgrounds as a guide. The cumulative knowledge that results from the Ultimate Coder Challenge: Going Perceptual is intended to enrich and supplement Intel's own documentation and help improve its SDK.

     

    Intel does not make any representations or warranties whatsoever regarding quality, reliability, functionality, or compatibility of third-party vendors and their devices. For optimization information, see software.Intel.com/en-us/articles/optimization-notice/. All products, dates, and plans are based on current expectations and subject to change without notice. Intel, the Intel logo, and Intel Core are trademarks of Intel Corporation in the U.S. and/or other countries. *Other names and brands may be claimed as the property of others. Copyright © 2013. Intel Corporation. All rights reserved

     


    PERCEPTUAL COMPUTING: Depth Data Techniques


    Downloads


    PERCEPTUAL COMPUTING: Depth Data Techniques [PDF 479KB]

    1. Introduction


    Many developers have had an opportunity to explore the power of perceptual computing thanks to the Intel® Perceptual Computing SDK, and have created applications that track faces, gestures, and voice commands. Some developers go beyond the confines of the SDK and use the raw camera depth data to create amazing new techniques. One such developer (me) would like to share some of these techniques and provide a springboard for creating innovative solutions of your own.


    Figure 1:  16-bit Depth Data as seen from a Gesture Camera

    This article provides beginners with a basic knowledge of how to extract the raw depth data and interpret it to produce application control systems. It also suggests a number of more advanced concepts for you to continue your research.

    I’m assuming that readers are familiar with the Creative* Interactive Gesture camera and the Intel® Perceptual Computing SDK. Although the code samples are given in C++, the concepts explained are applicable to Unity* and C# developers as well.

    2. Why Is This Important


    To understand why depth data is important, you must consider the fact that the high level gesture functions in the SDK are derived from this low level raw data. Features such as finger detection and gesture control begin when the SDK reads the depth data to produce its interpretations. Should the SDK run out of features before you run out of ideas, you need the ability to supplement the existing features with functions of your own.

    To that end, having a basic knowledge of app control through the use of depth data will be an invaluable skill to build on.


    Figure 2:  Unsmoothed raw depth data includes many IR artefacts

    As shown in Figure 2, the raw depth data that comes from the camera can be a soup of confusing values when unfiltered, but it also contains interesting metadata for those coders who want to go beyond conventional wisdom. For example, notice how the blobs in the upper right corner of the image contain bright and dark pixels, which might suggest very spiky camera-facing objects. In fact, these are glass picture frames scattering the IR signal and creating unreliable readings for the purpose of depth determination. Knowing that such a randomly spiky object could not exist in real depth, your code could determine the ‘material type’ of the object based on these artifacts.
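    As a purely illustrative sketch (my own, not part of the SDK), one way to flag such IR-scattering regions is to count implausibly large pixel-to-pixel depth jumps inside a small window; the window size and thresholds below are assumptions you would tune for your own camera and scene:

    // Illustration only: flag a window of depth data as "IR-scattering" if it
    // contains an implausible number of large pixel-to-pixel depth jumps.
    #include <cstdint>
    #include <cstdlib>

    // depth: 320x240 buffer of 16-bit depth values; (cx, cy): window center.
    bool LooksLikeScatteringSurface(const uint16_t* depth, int w, int h,
                                    int cx, int cy, int radius = 8,
                                    int jumpThreshold = 400, int maxJumps = 20)
    {
        int jumps = 0;
        for (int y = cy - radius; y < cy + radius; ++y)
        {
            if (y < 0 || y >= h) continue;
            for (int x = cx - radius; x < cx + radius - 1; ++x)
            {
                if (x < 0 || x + 1 >= w) continue;
                int d0 = depth[y * w + x];
                int d1 = depth[y * w + x + 1];
                if (std::abs(d0 - d1) > jumpThreshold) ++jumps;   // spiky neighbors
            }
        }
        return jumps > maxJumps;   // real surfaces are rarely this erratic
    }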

    This is just one example of how the raw depth data can be mined for as-yet unexplored techniques, and it helps to show how much more functionality we can obtain outside of the SDK framework.

    3. Depth Data Techniques


    Because this is a relatively new field, there is currently no standard set of techniques that apply directly to controlling an application using depth data. You will find a scattering of white papers on data analysis that may apply, and fragments of code available from the early Perceptual pioneers, but nothing you could point to as a definitive tome.

    Therefore, the following techniques should be viewed as unorthodox attempts at describing possible ways to obtain specific information from the depth data and should not be viewed as definitive solutions. It is hoped that these ideas will spark your own efforts, customized to the requirements of your particular project.

    Below is a summary of the techniques we will be looking at in detail:
    (a) Basic depth rendering
    (b) Filtering the data yourself
    (c) Edge detection
    (d) Body mass tracker

    Basic Depth Rendering

    The simplest technique to start with is also the most essential, which is to read the depth data and represent it visually on the screen. The importance of reading the data is a given, but it’s also vital to present a visual of what you are reading so that subsequent technique coding can be debugged and optimized.

    The easiest way to start is to run the binary example from the Intel Perceptual Computing SDK:

    \PCSDK\bin\win32\depth_smoothing.exe


    Figure 3:  Screenshot of the Depth Smoothing example from the SDK

    \PCSDK\sample\depth_smoothing\depth_smoothing.sln

    The project that accompanies this example is quite revealing in that there is very little code to distract and confuse you. It’s so small in fact that we can include it in this article:

    int wmain(int argc, WCHAR* argv[]) {
        UtilRender   rraw(L"Raw Depth Stream");
        UtilPipeline praw;
        praw.QueryCapture()->SetFilter(PXCCapture::Device::PROPERTY_DEPTH_SMOOTHING,false);
        praw.EnableImage(PXCImage::COLOR_FORMAT_DEPTH);
        if (!praw.Init()) {
            wprintf_s(L"Failed to initialize the pipeline with a depth stream inputn");
            return 3;
        }
    
        UtilRender rflt(L"Filtered Depth Stream");
        UtilPipeline pflt;
        pflt.QueryCapture()->SetFilter(PXCCapture::Device::PROPERTY_DEPTH_SMOOTHING,true);
        pflt.EnableImage(PXCImage::COLOR_FORMAT_DEPTH);
        pflt.Init();
    
        for (bool br=true,bf=true;br || bf;Sleep(5)) {
            if (br) if (praw.AcquireFrame(!bf)) {
                if (!rraw.RenderFrame(praw.QueryImage(PXCImage::IMAGE_TYPE_DEPTH))) br=false;
                praw.ReleaseFrame();
            }
            if (bf) if (pflt.AcquireFrame(!br))
            {
                PXCImage* depthimage = pflt.QueryImage(PXCImage::IMAGE_TYPE_DEPTH);
                if (!rflt.RenderFrame(depthimage)) bf=false;
                pflt.ReleaseFrame();
            }
        }
        return 0;
    }
    

    Thanks to the SDK, what could have been many pages of complicated device acquisition and GUI rendering code has been reduced to a few lines. The SDK documentation does an excellent job of explaining each line and there is no need to repeat it here, except to note that in this example the depth data is given directly to the renderer, with no intermediate layer where the data can be read or manipulated. A better example for understanding how to do that is the following:

    \PCSDK\bin\win32\camera_uvmap.exe
    \PCSDK\sample\camera_uvmap\camera_uvmap.sln

    Familiarizing yourself with these two examples, with the help of the SDK documentation, will give you a working knowledge of initializing and syncing with the camera device, reading and releasing the depth data image, and understanding the different channels of data available to you.

    Filtering the Data Yourself

    Not to be confused with the depth smoothing performed by the SDK/driver, filtering in this sense means removing the depth layers your application is not interested in. As an example, imagine you are sitting at your desk in a busy office with colleagues walking back and forth in the background. You do not want your application to respond to these intrusions, so you need a way to block them out. Alternatively, you may want to focus only on the middle depth, excluding any objects in the foreground such as desktop microphones and stray hand movements.

    The technique involves only a single pass, reading the smoothed depth data and writing out to a new depth data image for eventual rendering or analysis. First, let’s look at code taken directly from the CAMERA_UVMAP example:

    int cwidth2=dcolor.pitches[0]/sizeof(pxcU32); // aligned color width
    // dwidth2 (used below) is the aligned depth width, defined earlier in the sample
    for (int y=0;y<(int)240;y++) 
    {
     for (int x=0;x<(int)320;x++) 
     {
      int xx=(int)(uvmap[(y*dwidth2+x)*2+0]*pcolor.imageInfo.width+0.5f);
      int yy=(int)(uvmap[(y*dwidth2+x)*2+1]*pcolor.imageInfo.height+0.5f);
      if (xx>=0 && xx<(int)pcolor.imageInfo.width)
       if (yy>=0 && yy<(int)pcolor.imageInfo.height)
        ((pxcU32 *)dcolor.planes[0])[yy*cwidth2+xx]=0xFFFFFFFF;
     }
    }
    

    As you can see, we have an image ‘dcolor’ for the picture image coming from the camera. Consider that the depth region is only 320x240 compared to the camera picture of 640x480, so the UVMAP reference array translates depth data coordinates to camera picture data coordinates.

    The key element to note here is the nested loop that will iterate through every pixel in the 320x240 region and perform a few lines of code. As you can see, there is no depth data reading in the above code, only camera picture image writing via dcolor.planes[0]. Running the above code would produce a final visual render that looks something like this:


    Figure 4:  Each white dot in this picture denotes a mapped depth data coordinate

    Modifying the example slightly, we can read the depth value at each pixel and decide whether we want to render out the respective camera picture pixel. The problem of course is that for every depth value that has a corresponding camera picture pixel, many more picture pixels are unrepresented. This means we would still see lots of unaffected picture pixels for the purpose of our demonstration. To resolve this, you might suppose we could reverse the nested loop logic to traverse the 640x480 camera picture image and obtain depth values at the respective coordinate.

    Alas, there is no inverse UVMAP reference provided by the current SDK/driver, and so we are left to concoct a little fudge. In the code below, the 640x480 region of the camera picture is traversed, but the depth value coordinate is arrived at by creating an artificial UVMAP array that contains the inverse of the original UV references, so instead of depth data coordinates converted to camera picture image references, we have picture coordinates converted to depth data coordinates.

    Naturally, there will be gaps in the data, but we can fill those by copying the depth coordinates from a neighbor. Here is some code that creates the reverse UVMAP reference data. It’s not a perfect reference set, but sufficient to demonstrate how we can manipulate the raw data to our own ends:

    // 1/2 : fill picture UVMAP with known depth coordinates
    if ( g_biguvmap==NULL )
    {
     g_biguvmap = new int[640*481*2];
    }
    memset( g_biguvmap, 0, sizeof(int)*640*481*2 );
    for (int y=0;y<240;y++) 
    {
     for (int x=0;x<320;x++) 
     {
      int dx=(int)(uvmap[(y*320+x)*2+0]*pcolor.imageInfo.width+0.5f);
      int dy=(int)(uvmap[(y*320+x)*2+1]*pcolor.imageInfo.height+0.5f);
      g_biguvmap[((dy*640+dx)*2)+0] = x;
      g_biguvmap[((dy*640+dx)*2)+1] = y;
     }
    }
    
    // 2/2 : populate gaps in picture UVMAP horizontally and vertically
    int storex=0, storey=0, storecount=5;
    for (int y=0;y<480;y++) 
    {
     for (int x=0;x<640;x++) 
     {
      int depthx = g_biguvmap[((y*640+x)*2)+0];
      int depthy = g_biguvmap[((y*640+x)*2)+1];
      if ( depthx!=0 || depthy!=0 )
      {
       storex = depthx;
       storey = depthy;
       storecount = 5;
      }
      else
      {
       if ( storecount > 0 )
       {
        g_biguvmap[((y*640+x)*2)+0] = storex;
        g_biguvmap[((y*640+x)*2)+1] = storey;
        storecount--;
       }
      }
     }
    }
    for (int x=0;x<640;x++) 
    {
     for (int y=0;y<480;y++) 
     {
      int depthx = g_biguvmap[((y*640+x)*2)+0];
      int depthy = g_biguvmap[((y*640+x)*2)+1];
      if ( depthx!=0 || depthy!=0 )
      {
       storex = depthx;
       storey = depthy;
       storecount = 5;
      }
      else
      {
       if ( storecount > 0 )
       {
        g_biguvmap[((y*640+x)*2)+0] = storex;
        g_biguvmap[((y*640+x)*2)+1] = storey;
        storecount--;
       }
      }
     }
    }
    

    We can now modify the example to use this new reverse UVMAP reference data, and then limit which pixels are written to using the depth value as the qualifier:

    // manipulate picture image
    for (int y=0;y<(int)480;y++) 
    {
     for (int x=0;x<(int)640;x++) 
     {
      int dx=g_biguvmap[(y*640+x)*2+0];
      int dy=g_biguvmap[(y*640+x)*2+1];
      pxcU16 depthvalue = ((pxcU16*)ddepth.planes[0])[dy*320+dx];
      if ( depthvalue>65535/5 ) 
       ((pxcU32 *)dcolor.planes[0])[y*640+x]=depthvalue;
     }
    }
    
    

    When the modified example is run, we can see that the more distant pixels are colored, while pixels whose depth values do not meet the added condition are left unaffected, allowing the original camera picture image to show through.


    Figure 5:  All pixels outside the depth data threshold are colored green

    As it stands, this technique could act as a crude green-screen chroma key effect, or separate out the color data for further analysis. Either way, it demonstrates how a few extra lines of code can pull out specific information.

    Edge Detection

    Thanks to many decades of research into 2D graphics, there is a plethora of papers on edge detection in static images, used widely in art packages and image processing tools. Edge detection for the purposes of perceptual computing requires that the technique be performance friendly and able to run against a real-time image stream. An edge detection algorithm that takes 2 seconds and produces perfect contours is of no use in a real-time application. You need a system that can find an edge within a single pass and feed the required data directly to your application.

    There are numerous types of edges your application may want to detect, from locating where the top of the head is all the way through to defining the outline of any shape in front of the camera. Here is a simple method to determine the location of the head in real-time. The technique makes use of edge detection to determine the extent of certain features represented in the depth data.


    Figure 6:  Depth data with colored dots to show steps in head tracking

    In the illustration above, the depth data has been marked with a number of colored dots. These dots will help explain the techniques used to detect the position of the head at any point in time. More advanced tracking techniques can be employed for more accurate solutions, or if you require multiple head tracking. In this case, our objective is fast zero-history, real-time head detection.

    Our technique begins by identifying the depth value closest to the camera, which in the above case will be the nose. A ‘peace gesture’ has been added to the shot to illustrate the need to employ the previous technique of clipping any foreground objects that would interfere with the head tracking code. Once the nearest point, the nose, has been found, we mark its coordinate for the next step.
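    A minimal sketch of this first step might look like the following. It reuses the 320x240, 16-bit depth layout from the snippets above (ddepth.planes[0]), treats smaller values as nearer, and skips zero readings, which are assumed here to be invalid pixels:

    // Illustrative sketch: find the depth pixel nearest the camera (the nose).
    int nearX=0, nearY=0;
    pxcU16 nearValue = 65535;
    for (int y=0;y<240;y++)
    {
     for (int x=0;x<320;x++)
     {
      pxcU16 depthvalue = ((pxcU16*)ddepth.planes[0])[y*320+x];
      if ( depthvalue==0 ) continue;          // skip unmeasured pixels
      if ( depthvalue<nearValue )
      {
       nearValue = depthvalue;
       nearX = x;
       nearY = y;
      }
     }
    }
    // (nearX, nearY) now marks the starting point (the red dot) for the edge scans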

    Scanning out from the nose position, we march through the depth data in a left-to-right direction until we detect a sharp change in the depth value. This lets us know that we have reached the edge of the object we are traversing. At this stage, be aware that IR interference can make this edge quite erratic, but the good news for this technique is that any sharp increase in depth value means we’re either very near or on the edge of interest. We record these coordinates, indicated as green dots in Figure 6, and proceed to the third step.

    Using the direction vector between the red and green dots, we can project out half the distance between the center of the object and the edge to arrive at the two blue dots as marked. From each of these coordinates, we scan downwards until a new edge is detected. The definition of a head, for the purposes of this technique, is that it must sit on shoulders. If the depth value drops suddenly, indicating a near object (i.e., a shoulder), we record the coordinate and confirm that side of the scan. When both left and right shoulders have been located, the technique reports that the head has been found. Some conditions are placed to ensure other types of objects will not result in head detection, such as the shoulders being too far down the image. The technique also depends on the user not getting too close to the camera, where the shoulders might fall outside the depth camera's view.

    Once the existence of the head has been determined, we can average the positions of the two green dot markers to arrive at the center position of the head. Optionally, you can traverse the depth data upwards to find the top of the head, marked as a purple dot. The downside to this approach, however, is that hair does not reflect IR very well and produces wild artifacts in the depth data. A better approach to arriving at the vertical coordinate for the head center is to take the average Y coordinate of the two sets of blue dot markers.

    The code to traverse the depth data as described above is relatively straightforward and very performance friendly. You can either read from the depth data directly or copy the contents to a prepared, filtered buffer in a format of your choice.

    // traverse from center to left edge
    int iX=160, iY=120, found=0;
    while ( found==0 )
    {
      iX--;
      pxcU16 depthvalue = ((pxcU16*)ddepth.planes[0])[iY*320+iX];
      if ( depthvalue>65535/5 ) found=1;
      if ( iX<1 ) found=-1;
    }
    

    Naturally, your starting position would be a coordinate detected as near the camera, but the code shown above will return, in the variable iX, the left-edge depth coordinate of the object of interest. Similar loops provide the other marker coordinates, and from those the averaged head center position can be calculated.
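    Continuing in the same style, a rough sketch of the remaining steps might look like this. It is an illustration only; the scan limits and the simple shoulder test are my own assumptions rather than values from the original application, and iX, iY, and found carry over from the loop above:

    // Illustrative sketch: locate the right edge and estimate the head center.
    int iLeftX=iX, iRightX=160;
    found=0;
    while ( found==0 )
    {
      iRightX++;
      pxcU16 depthvalue = ((pxcU16*)ddepth.planes[0])[iY*320+iRightX];
      if ( depthvalue>65535/5 ) found=1;      // sharp jump in depth = edge
      if ( iRightX>318 ) found=-1;
    }
    int iHeadCenterX = (iLeftX + iRightX) / 2;   // midpoint of the two green dots
    
    // Simple shoulder test: scan downward from a point just outside the head edge
    // (a blue dot) until the depth value drops, indicating a near object below.
    int iBlueX = iRightX + (iRightX-iHeadCenterX)/2;
    if ( iBlueX>319 ) iBlueX=319;
    int iShoulderY=-1;
    for ( int y=iY; y<240; y++ )
    {
      pxcU16 depthvalue = ((pxcU16*)ddepth.planes[0])[y*320+iBlueX];
      if ( depthvalue<65535/5 ) { iShoulderY=y; break; }   // shoulder found
    }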

    The above technique is very performance friendly, but it sacrifices accuracy and lacks an initial object validation step. For example, it can be fooled into thinking a hand is a head if the hand is positioned directly in front of the camera. Such discoveries become commonplace when developing depth data techniques, and resolving them will improve your final algorithm to the point where it becomes capable of real-world use.

    Body mass tracker

    One final technique is included to demonstrate true out-of-the-box thinking when it comes to digesting depth data and excreting interesting information. It is also a very simple and elegant technique.

    By using the depth value to decide which pixel coordinates are cumulatively added together, you can arrive at a single coordinate that indicates, in general terms, which side of the camera the user is located on. That is, when the user leans to the left, your application can detect this and provide a suitable coordinate to track them; when they lean to the right, the application will continue to follow them; and when the user bows forward, this too can be tracked. Because the average is taken across the whole frame, individual details like hand movements, background objects, and other distractions are absorbed into a ‘whole view average.’

    The code is divided into two simple steps. The first averages the coordinates of every depth pixel that passes the depth test to produce a single coordinate, and the second draws a dot onto the camera picture image render so we can see whether the technique works. When run, you will see the dot center itself on the activity in the depth data.

    // find body mass center
    int iAvX = 0;
    int iAvY = 0;
    int iAvCount = 0;
    for (int y=0;y<(int)480;y++) 
    {
     for (int x=0;x<(int)640;x++) 
     {
      int dx=g_biguvmap[(y*640+x)*2+0];
      int dy=g_biguvmap[(y*640+x)*2+1];
      pxcU16 depthvalue = ((pxcU16*)ddepth.planes[0])[dy*320+dx];
      if ( depthvalue<65535/5 ) 
      {
       iAvX = iAvX + x;
       iAvY = iAvY + y;
       iAvCount++;
      }
     }
    }
    // guard against an empty sample (no pixels passed the depth test)
    if ( iAvCount>0 )
    {
     iAvX = iAvX / iAvCount;
     iAvY = iAvY / iAvCount;
    
     // draw body mass dot
     for ( int drx=-8; drx<=8; drx++ )
      for ( int dry=-8; dry<=8; dry++ )
       ((pxcU32*)dcolor.planes[0])[(iAvY+dry)*640+(iAvX+drx)]=0xFFFFFFFF;
    }
    

    In Figure 7 below, notice the white dot that has been rendered to represent the body mass coordinate. As the user leans right, the dot respects the general distribution by smoothly floating right; when he leans left, the dot smoothly floats left, all in real time.


    Figure 7: The white dot represents the average position of all relevant depth pixels

    These are just some of the techniques you can try yourself, as a starting point to greater things. The basic principle is straightforward and the code relatively simple. The real challenge is creating concepts that go beyond what we’ve seen here and re-imagine uses for this data.

    You are encouraged to experiment with the two examples mentioned and insert the above techniques into your own projects to see how easy it can be to interpret the data in new ways. It is also recommended that you regularly apply your findings to real-world application control so you stay grounded in what works and what does not.

    4. Tricks and Tips


    Do’s

    • When you are creating a new technique from depth data to control your application in a specific way, perform as much user testing as you can. Place your family and friends in front of your application and see whether it responds as expected, and vary your environment as well: move the chair, rotate your Ultrabook™ device, switch off the light, and play your app at 6 AM.
    • Be aware that image resolutions from a camera device can vary, and the depth resolution can be different from the color resolution. For example, the depth data size used in the above techniques is 320x240 and the color is 640x480. The techniques used these fixed regions to keep the code simple. In real-world scenarios, the SDK can detect numerous camera types with different resolutions for each stream. Always detect these dimensions and feed them directly to your techniques.

    Don’ts

    • Until depth data resolution matches or exceeds camera color resolution, the reverse UVMAP reference technique noted above cannot be relied on to produce 100% accurate depth readings. With this in mind, avoid applications that require a perfect mapping between color and depth streams.
    • Avoid multi-pass algorithms whenever possible and use solutions that traverse the depth data in a single nested loop. Even though 320x240 may not seem a significant resolution to traverse, it only takes a few math divisions within your loop code to impact your final application frame rate. If a technique requires multiple passes, check to see if you can achieve the same result in a single pass or store the results from the previous pass to use in the next cycle.
    • Do not assume the field of view (FOV) for the color camera is the same as the depth camera's. A common mistake is to assume the FOV angles are identical and simply divide the color image coordinate by two to get the depth coordinate. This will not work and will result in disparity between your color and depth reference points.
    • Avoid streaming full color, depth, and audio from the Creative Gesture Camera at full frame rate when possible, as this consumes a huge amount of bandwidth that impacts overall application performance. An example is detecting gestures while detecting voice control commands and rendering the camera stream to the screen at the same time. You may find voice recognition fails in this scenario. If possible, when voice recognition is required, deactivate one of the camera image streams to recover bandwidth.
    • Do not assume the depth data values are accurate. By their very nature they rely on IR signal bounces to estimate depth, and some material surfaces and environmental agents can affect the readings. Your techniques should account for an element of variance in the data returned (one simple way to damp it is sketched after this list).
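
    As one illustrative way to absorb that variance (my own sketch, separate from the SDK's built-in depth smoothing), a per-pixel running average damps frame-to-frame jitter at very little cost; the buffer names and the blend factor are assumptions:

    // Illustrative sketch: per-pixel exponential moving average on the raw depth.
    // rawdepth: current 320x240 16-bit frame; smoothdepth: persistent float buffer.
    void SmoothDepthFrame( const pxcU16* rawdepth, float* smoothdepth, bool firstframe )
    {
     for ( int i=0; i<320*240; i++ )
     {
      if ( rawdepth[i]==0 ) continue;          // keep the last good value for holes
      if ( firstframe ) { smoothdepth[i] = (float)rawdepth[i]; continue; }
      smoothdepth[i] += 0.3f * ( (float)rawdepth[i] - smoothdepth[i] );  // blend toward new reading
     }
    }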

    5. Advanced Concepts


    The techniques discussed here are simplified forms of the code you will ultimately implement in your own applications. They provide a general overview and starting point for handling depth data.

    There are a significant number of advanced techniques that can be applied to the same data, some of which are suggested below:
    (a) Produce an IK skeleton from head and upper arms
    (b) Sculpt more accurate point data from a constant depth data stream
    (c) Gaze and eye detection using a combination of depth and color data
    (d) Detect the mood of the user by associating body, hand, and head movements
    (e) Predict which button Bob is going to press before he actually presses it
    (f) Count the number of times Sue picks up her tea cup and takes a sip

    It is apparent that we have really just scratched the surface of what is possible with the availability of depth data as a permanent feature of computing. As such, the techniques shown here merely hint at the possibilities, and as this sector of the industry matures, we will see some amazing feats of engineering materialize from the very same raw depth data we have right now.

    Since the mouse and pointer were commercialized, we’ve not seen an input medium as fundamentally new as Perceptual Computing. Touch input had been around for two decades before it gained widespread popularity, and it was software that finally made touch technology shine. It is reasonable to suppose that we need a robust, predictable, fast, and intuitive software layer to complement the present Perceptual Computing hardware. It is also reasonable to expect that these innovations will occur in the field, created by coders who are not just solving a specific application issue but are, in fact, contributing to the arsenal of weaponry Perceptual Computing can eventually draw from. The idea that your PC or Ultrabook can predict what you want before you press or touch anything is the stuff of fiction now, but in a few years’ time it may not only be possible but commonplace in our increasingly connected lives.

    About The Author


    When not writing articles, Lee Bamber is the CEO of The Game Creators (http://www.thegamecreators.com), a British company that specializes in the development and distribution of game creation tools. Established in 1999, the company and surrounding community of game makers are responsible for many popular brands including Dark Basic, FPS Creator, and most recently App Game Kit (AGK).

    The application that inspired this article and the blog that tracked its seven week development can be found here: http://ultimatecoderchallenge.blogspot.co.uk/2013/02/lee-going-perceptual-part-one.html

    Lee also chronicles his daily life as a coder, complete with screen shots and the occasional video here: http://fpscreloaded.blogspot.co.uk

    Intel does not make any representations or warranties whatsoever regarding quality, reliability, functionality, or compatibility of third-party vendors and their devices. For optimization information, see software.Intel.com/en-us/articles/optimization-notice/. All products, dates, and plans are based on current expectations and subject to change without notice. Intel, the Intel logo, and Ultrabook are trademarks of Intel Corporation in the U.S. and/or other countries. *Other names and brands may be claimed as the property of others. Copyright © 2013. Intel Corporation. All rights reserved.

     

    Location Data Logger Design and Implementation, Part 4: Bing Maps Integration


    This is part 4 of a series of blog posts on the design and implementation of the location-aware Windows Store app "Location Data Logger". Download the source code to Location Data Logger here.

    The Bing* Maps SDK

    One of Location Data Logger's primary features is the map display, which shows the device's current location and a so-called breadcrumb trail of its movement. This is accomplished using the Bing Maps SDK for Windows Store apps (and note that this URL does change, so the preceding link may not work forever; search engines are your friend). Before I dive into the details of gluing the map display to Location Data Logger, however, it's probably worth covering an important ground rule: the Bing Maps SDK has a license agreement associated with it, and to use the map control in your app you must first obtain a developer key and then agree to the terms and conditions of its use.

    Design goals

    The first step in integration is to figure out what you want the control to do, and how it should fit in with the overall app. For Location Data Logger I had the following requirements:

    1. The device's current position should always be displayed on the map. Any time a position update occurs, whether or not we are actively logging the data points, the display should update.
    2. The map should auto-center to track the device's movement.
    3. The user can override auto-centering, as well as turn it back on.
    4. The data points that have been logged should also be displayed on the map as a breadcrumb trail.
    5. The user can turn off the breadcrumb display.

    These will require us to set a number of event handlers for the map control, and I'll cover those implementation details in a moment.

    Adding the map control

    I used XAML to add the map control to the page layout. The XAML does require you to import the Bing.Maps namespace, however, and here I have mapped Bing.Maps to the prefix bm:.

    <common:LayoutAwarePage
     x:Name="pageRoot"
     x:Class="Location_Data_Logger.MainPage"
    ...
    xmlns:bm="using:Bing.Maps"

    The map control can now be added thusly:

    <bm:Map Grid.Row="0" x:Name="mapPosition" Credentials="INSERT_YOUR_DEVELOPER KEY_HERE"
     PointerWheelChangedOverride="mapPosition_PointerWheelChanged" PointerMovedOverride="mapPosition_PointerMoved"
     DoubleTappedOverride="mapPosition_DoubleTapped" PointerCanceledOverride="mapPosition_PointerCanceled"
     PointerReleasedOverride="mapPosition_PointerReleased" PointerPressedOverride="mapPosition_PointerPressed"
     ViewChanged="mapPosition_ViewChanged" />
     

    Note the long list of event handlers, which I mentioned briefly above. Also in there is the Credentials attribute, which is where you place your developer key. Your key is generated for you when you create an account for yourself and register your app. Each app you create uses its own unique key, and this key is used to uniquely identify your app to the Bing servers. This is how Microsoft tracks your app's usage of the service: if your app's usage exceeds the limits set for free accounts, then your app will be blocked-- for anyone who uses it, wherever they are-- until the next day. (If this happens frequently you will want to consider purchasing a high-volume license.)

    Displaying the device position

    Displaying the position on the map is pretty straightforward, but it does require setting up some infrastructure. And to do that, I need to first talk about the structure of the map control itself. The map object has, among other things, two properties that hold map objects:

    1. The Children property is a MapUIElementCollection object which is used to hold UI elements directly, and MapLayer objects if you choose to create layers of controls. If you create a custom control for placement on the map, as I do in Location Data Logger, you will add it to this collection.
    2. The ShapeLayers property is a MapShapeLayerCollection object which holds MapShapeLayer objects. A MapShapeLayer is where you draw map shapes (polylines, polygons, and multipoints).

    To display the device's position, I created a user control named MapAccuracyCircle. The specifics of this control will be discussed in a future post in this blog series, but for now just accept that this is a custom control, and as a control it gets added to the map's Children collection. Displaying the location on the map means placing the position into an object that the Bing Maps control can understand, and then updating that position as needed.

    The relevant code snippets:

     public sealed partial class MainPage : Location_Data_Logger.Common.LayoutAwarePage
     {
     ...
    MapAccuracyCircle accuracy_circle;
     Location current_location;
    ...
    public MainPage()
     {
    ...
    accuracy_circle = new MapAccuracyCircle(mapPosition);
     accuracy_circle.Visibility = Windows.UI.Xaml.Visibility.Collapsed;
     mapPosition.Children.Add(accuracy_circle);
    

    Note that I start with the Visibility property set to Collapsed. This is because there is no location set yet, so the accuracy circle should not be visible. Visibility is set in the update_status() method:

    accuracy_circle.Visibility = (s == Windows.Devices.Geolocation.PositionStatus.Ready) ?
     Windows.UI.Xaml.Visibility.Visible : Windows.UI.Xaml.Visibility.Collapsed;

    The Location class is part of the Bing Maps API and contains the coordinates of a location on the map, and it is updated within the update_position() method:

    MapLayer.SetPosition(accuracy_circle, current_location);

    (Using MapLayer in this manner is how you adjust the position of a control that has been directly added to a map instead of through a map layer.)

    Auto-centering on the device location

    Auto-centering is quite simple: every time a PositionChanged event is received, just recenter the map.

    if ( toggleAutoCenter.IsChecked == true ) mapPosition.SetView(current_location, map_zoom);

    If auto-centering is enabled, via a toggle button, then set the new map view. The map's zoom level is tracked in the variable map_zoom in case it is needed elsewhere in the future (currently, this variable is redundant and not really used).

    Toggling auto-center off and on

    It makes sense to disable auto-centering of the map when the user takes action that implies he or she no longer wants it, such as scrolling the map view, and to turn it back on when they are done. This is accomplished in two ways. The first is with a toggle button below the map called "Autocenter". When it's on, the map display will auto-center and when it's off, it won't. The second is using event handlers on the map control.

     // Map update events
    
    private void mapPosition_PointerPressed(object sender, RoutedEventArgs e)
    {
        map_pointer_pressed = true;
    }
    
    private void mapPosition_PointerReleased(object sender, RoutedEventArgs e)
    {
        map_pointer_pressed = false;
    }
    
    private void mapPosition_PointerMoved(object sender, RoutedEventArgs e)
    {
        if (map_pointer_pressed == true)
        {
           toggleAutoCenter.IsChecked = false;
        }
    }
    
    private void mapPosition_PointerCanceled(object sender, RoutedEventArgs e)
    {
        map_pointer_pressed = false;
    }
    
    private void mapPosition_DoubleTapped(object sender, RoutedEventArgs e)
    {
        toggleAutoCenter.IsChecked = false;
    }
    
    private void mapPosition_ViewChanged(object sender, ViewChangedEventArgs e)
    {
        map_zoom = mapPosition.ZoomLevel;
    }
    
    private void mapPosition_PointerWheelChanged(object sender, PointerRoutedEventArgs e)
    {
        toggleAutoCenter.IsChecked = false;
    }

    There's a lot going on here, and it breaks down like this. Auto-centering is disabled when any of the following occur:

    1. The map view changes. The ViewChanged event is generated after the map view change has completed; that is, if you scroll the map view, the event is generated when the scrolling stops.
    2. The map is double-clicked. The DoubleTapped event is used to zoom in on the map. This action always changes the map's center point unless the user just happens to double-click on the exact pixel in the middle of the map. (This is not too likely, and it is a case that is ignored.)
    3. The pointer is moved while the pointer is pressed. This is the classic "click and drag" motion. The extra logic here prevents auto-centering from being disabled just because a mouse pointer moves across the map; the user must explicitly be doing a click-and-drag. Auto-center is turned off as soon as a drag operation starts, so the map does not recenter while the user is actively manipulating it.
    4. The pointer wheel is changed. This refers to the wheel device on a mouse. This action is mapped to zooming in and out in the map control. See #2.

    The Breadcrumb Trail

    The breadcrumb trail, which shows the positions recorded by the app during logging, is displayed using a shape layer. Two objects are added to the MainPage class:

    MapShapeLayer layerBreadcrumb;
     MapPolyline lineBreadcrumb;

    and the following code in the MainPage() constructor gets the map initialized:

    lineBreadcrumb = new MapPolyline();
    layerBreadcrumb = new MapShapeLayer();
    
    lineBreadcrumb.Color = Windows.UI.Colors.CornflowerBlue;
    lineBreadcrumb.Width = 3;
    
    layerBreadcrumb.Shapes.Add(lineBreadcrumb);
    mapPosition.ShapeLayers.Add(layerBreadcrumb);

    The breadcrumb trail is incrementally built inside the update_position() delegate. When update_position() is called, the DataLogger object also passes the boolean parameter logged. If this value is true, we add the point to the lineBreadcrumb polyline.

    current_location = new Location(c.Latitude, c.Longitude);
    if (logged)
    {
    
        ...
    
        // Add the point to our breadcrumb trail
        if (lineBreadcrumb != null) lineBreadcrumb.Locations.Add(current_location);
    }
    

    Since the breadcrumb display only shows the currently logged points, it also has to be cleared whenever the user starts a new logging session. This is accomplished in the logger_start() method.

    lineBreadcrumb.Locations.Clear();


    Turning the breadcrumb display on and off

    This is done with a toggle button under the map.

    private void toggleBreadcrumb_Click(object sender, RoutedEventArgs e)
    {
        Boolean onoff = (Boolean)toggleBreadcrumb.IsChecked;
    
        lineBreadcrumb.Visible = onoff;
    }
    

    Note that in the full source I'm not just changing the Visible property here: I am also saving the display preference for future sessions by writing it to roamingSettings.


    Seduce Case Study


    By John Gaudiosi

    Downloads


    Seduce Case Study [PDF 831.93KB]

    Introduction


    With Moore’s Law pushing technology forward at a record pace, the challenge for software developers today is to keep up with new input devices such as touch screens, perceptual computing, and eye tracking. One innovative developer created a future-proof system that will allow game makers and app programmers to stay ahead of the curve as new technology is introduced across multiple platforms.

    When Eskil Steenberg of Quel Solaar, an independent development and research studio, was contacted by the organizers of the Intel Ultimate Coder Challenge: Going Perceptual contest, he decided to enter because he loves using the Ultrabook™ device and wanted to do something very “techy and opaque.” Steenberg admits that he does not normally enter competitions and didn’t expect to win this one; however, he is glad he entered this particular contest, where his development toolkit, Seduce, won in the technical merit category.

    The Intel Ultimate Coder Challenge: Going Perceptual contest was created to encourage innovation around perceptual computing. By engaging with early innovators and thought leaders, contestants like Steenberg collaboratively shared across the contest, with weekly communication around their experiences with the Intel® Perceptual Computing SDK, their challenges, and their solutions. The collaboration also included features of the Intel Perceptual Computing SDK that each contestant leveraged and new algorithms they developed. Contestants collectively improved the resulting apps across the board.

    The Seduce App: Future Proof


    Steenberg and Intel had discussed collaborating on Seduce before the competition. As a game developer and programmer who currently sits on the OpenGL* Architectural Review Board, Steenberg wanted to create an app that would allow for seamless interactivity in today’s cross-platform technology world.

    “The PC is an incredibly open platform, and you can connect a wide variety of hardware, displays, and input devices,” said Steenberg. “Many people think of it as a desktop device with a mouse and keyboard.” Steenberg’s goal was to build applications that can accommodate the hardware available today as well as future hardware. “When a new input device enters the market, you usually try to redesign the application because there are certain things you didn’t think about. I wanted to fix that problem.”

    Future-proofing technology is something that everyone tries to do. The key to longevity of a program or app in the marketplace is to design it so that programmers and developers won’t have to constantly rewrite code. Steenberg accomplished this goal by focusing on a few products on the horizon.

    “I looked at very large displays built for multiple users, where the user cannot be expected to reach the entire display in a touch interface,” said Steenberg. “This impacts things such as the traditional start button, for example. When an entire wall makes up the display, users cannot touch the start button. I also examined the number of mouse pointers available for multi-touch, something that most interfaces and software today don’t handle well. For example, a mouse doesn’t work on Xbox* and a controller doesn’t work on Microsoft Windows*. In addition, I looked at resolution independence and scalability. Computers assume that an element with a certain amount of pixels will cover a certain portion of the screen. Independence in resolution input and graphics would allow low-resolution displays to work with a high-resolution mouse and super high-resolution displays to work with touch buttons for core precision.”


    Seduce demo pop-up menu

    Steenberg noted the possibilities and limitations of these technologies and coded the software so it stayed within these parameters and limitations. He said complications sometimes arose because he needed to solve problems for things he wouldn’t have done otherwise. He made a future projection, listed its requirements, and then made architectural adjustments. “I created an interface that generically described input devices so users can connect and configure them to any input device,” said Steenberg. “The technology challenge was not the interface, but rather the ability of plug-ins to take over the rendering pipeline, which was much more complicated. In the end, it was a lot of trial and error. The Microsoft interface for OpenGL and context system that exists is robust and good; however, it’s an obscure piece of technology that only a few people on the OpenGL review board, operating system developers, and I care about. There are very few specific uses for creating multiple contexts of OpenGL and making them work together. I searched for people who could give me sample code or a description of how this is done, and I spent a few late nights on the Internet to learn how it works.”

    To display an interface, one shouldn’t assume the pixel resolution in any way corresponds to the size of the interface. A normal image on a PC screen comprises a collection of pixels, but when zoomed in, the image gets blocky. Steenberg set out to create imagery whose quality would remain constant, regardless of size. Interfaces today are designed for a general idea of the display resolution, utilizing bitmap graphics. He sought a different approach by storing the images as triangles, or curved lines, using mathematical descriptions. Regardless of how small or large a triangle gets, you can re-compute which pixels are inside or outside the triangle. An interface built from triangles, or polygons, scales with resolution because its elements become crisper and more detailed when zoomed in. The entire interface is scalable: each individual element in the interface can be changed in the display to accommodate different environments and displays, such as using a touch display while wearing gloves.


    Seduce demo icon list

    The interface should take into account the users’ viewing angles within a 3D space so it can easily support stereoscopic displays, head tracking, head-mounted displays, and augmented-reality applications. Steenberg created the Betray library, which opens up the future-proof technology that Seduce unleashes across all platforms, allowing any type of input device—from mouse to keyboard to gestures to touch screens—to seamlessly work on an app with the ease of an API.

    “Betray is a library of all the inputs from the hardware,” said Steenberg. “I wanted to (hide) where the input originated because a pointer can come from many different devices, including a mouse, track pad, touch screen, or a Wii* remote. The device can have any number of buttons. I wanted to enter something very generic.”

    The Betray library doesn’t use much space and is actually two different APIs. One is what the application uses to ask for hardware capabilities, input, buttons, display, and the things required to send out sounds. This encompasses 90 percent of how the app will be used. Betray is currently very small and has very limited features because Steenberg’s goal was to create a secondary API that doesn’t let users read input, but rather provide it. Betray allows users to write a plug-in for any type of input.

    “You can install an SDK and then write a plug-in that explains to Betray what this hardware does. Betray then passes this information to the normal API for use,” said Steenberg. “The app doesn’t need to understand how the plug-in works; it simply requests the information so that developers can support hardware that they don’t have or understand.”


    Betray relinquish test application

    That means users can buy a plug-in that searches for your buttons or pointers, changes the display or maximizes it in certain ways, implements sounds, or adds an entire 3D sound system.

    “An interface, buttons, sliders, and other input devices are needed to support future hardware,” said Steenberg. “The end stage is an OpenGL standards interface that is fully 3D. Everything is transferrable and scalable, so there’s no pixel size. The interface handles everything from smartphones to a full wall-size display.”

    Challenges Addressed During Development


    Project and Technology

    Steenberg needed to create an intuitive, future-proof library to seamlessly allow input from multiple devices. His first objective was to create the “Imagine” sub-library to handle directory management, listing volumes and directories in a way that unifies Unix* and Windows. A traditional Windows PC has multiple volumes of storage that exist virtually in the OS; a volume is not actually a location in a single tree, it is where you pick your physical hard drive. Windows therefore requires two separate pieces of information: the available volumes and their content. In a Unix system, one root exists from which everything branches off, and you can place things anywhere in the tree. This causes complications for programmers trying to support both kinds of machine. In Steenberg’s solution, if the user requests a root directory, the library returns a virtual directory list of the machine’s volumes, and the app can descend into those drives automatically. When you search your machine for a file, you can easily do a recursive search and the app searches all the disks; the code looks the same whether it runs on a Unix, Linux*, or Mac* system. He also created the application settings API (previously found in Seduce), dynamic loading of libraries, sharing of function pointers (needed for the plug-in system), threads and mutexes, and execution support.
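    As a purely illustrative sketch of that idea (not the Imagine library's actual code), the Windows side of such a virtual root can be built from the drive bitmask returned by the Win32 GetLogicalDrives call; the function name and return format below are my own assumptions:

    // Illustration only: expose Windows drive letters as entries of a single
    // virtual root, so callers can treat the filesystem as one Unix-style tree.
    #include <windows.h>
    #include <string>
    #include <vector>

    std::vector<std::string> ListVirtualRoot()
    {
        std::vector<std::string> entries;
        DWORD mask = GetLogicalDrives();          // bit 0 = A:, bit 1 = B:, ...
        for (int i = 0; i < 26; ++i)
            if (mask & (1u << i))
                entries.push_back(std::string(1, char('A' + i)) + ":\\");
        return entries;                           // e.g. {"C:\\", "D:\\"}
    }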

    Next, Steenberg implemented an out-of-the-box functionality of the Betray library, using code from older projects. This allowed for opening a window with OpenGL/OpenGL ES Context (with FSAA), mouse and keyboard support, reading cut and paste, opening file requesters, directory search, execute, quad buffer stereoscopic, threads, multi-touch (it supports Windows 7 and forward, but not builds on older systems), full screen, timers, and mouse warp. Steenberg is a pure C programmer, so he sought help from a C++ friend, Pontus Nyman, to write a C wrapper for the functionality of the API. Steenberg also encountered an algorithm that didn’t use a depth map for face recognition; he overcame this issue using his own code.

    The project undertaken during the challenge was to enhance and simplify a platform library with features such as multi-touch, tilt sensors, head tracking, and stereoscopics. Several different types of applications exist, including Adri Om, a data visualization tool, and Dark Side of the Moon, a real-time strategy game currently using the platform library, which will be modified to showcase the possibilities with these technologies. Steenberg identified an interface toolkit with sliders, buttons, menus, and other elements. The toolkit was designed for software development where the application can run on diverse hardware setups such as tablets, TVs, PCs, laptops, convertibles, head-mounted displays, or large scale multi-user walls. It includes wands, head tracking, multi-touch, and stereoscopics.


    Seduce tracks head and hand movements for new interactivity.

    According to Steenberg, working with the Intel Perceptual Computing SDK (in beta at the time of the contest) presented some challenges; however, he was able to use the Creative* Interactive Gesture camera and bypass the SDK and API to get the head tracking at 60 frames per second and the head detection to a quarter pixel (down from five pixels). He wrote four separate algorithms to find, track, and stabilize the head position, down to sub-pixel accuracy. He used no smoothing or prediction to avoid adding any latency. The result was a much more predictable, precise, and stable head tracker.

    “Once you find a head you’re tracking, you want to hold onto it,” said Steenberg. “Therefore the ‘head finder’ is only needed in the first frame or if the algorithm concludes that the head is lost. To make this quick, I pick a pixel, read out its depth value, and then check whether the pixels a head-size away (to the left, right, and above) are all at least 200 mm farther away. I do this on every hundredth pixel. When I get a lot of positives, I choose the one closest to the camera.”
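
    As an illustration only, the scan Steenberg describes might look like the following C# sketch. The depth-buffer layout, millimeter units, and headSizePx parameter are my assumptions; his code is written in C and is not reproduced here.

    // Hypothetical sketch of the “head finder” scan described above. depth is
    // assumed to be a row-major array of millimeter readings from the gesture
    // camera; headSizePx approximates a head’s width in pixels.
    static int FindHeadCandidate(ushort[] depth, int width, int headSizePx)
    {
        int best = -1;
        int bestDepth = int.MaxValue;

        // Sample every hundredth pixel rather than the whole frame.
        for (int i = 0; i < depth.Length; i += 100)
        {
            int x = i % width, y = i / width;
            int d = depth[i];
            if (d == 0) continue;   // no depth reading at this pixel

            // The pixels one head-size to the left, right, and above must all be
            // at least 200 mm farther from the camera for this to look like a head.
            if (x - headSizePx < 0 || x + headSizePx >= width || y - headSizePx < 0) continue;
            int left  = depth[y * width + (x - headSizePx)];
            int right = depth[y * width + (x + headSizePx)];
            int above = depth[(y - headSizePx) * width + x];
            if (left  != 0 && left  < d + 200) continue;
            if (right != 0 && right < d + 200) continue;
            if (above != 0 && above < d + 200) continue;

            // Among all positives, keep the candidate closest to the camera.
            if (d < bestDepth) { bestDepth = d; best = i; }
        }
        return best;    // index of the candidate pixel, or -1 if none was found
    }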

    Mother Nature caused one of the issues Steenberg faced. Heads are round, and if you choose a pixel toward the side of a head, the height value will be much lower and the vertical scan will fall off the edge where the head rounds off. Second, the edge of a head has hair, which diffuses the infrared (IR) pulse. To resolve this, Steenberg sent many vertical and horizontal rays toward the head, accounting for distance, and then averaged them.

    “Now I have a fairly accurate idea of the head’s location; however, I was tracking the edges of the head hair and not the skull surface,” said Steenberg. “I drew a box around the head and took an average position of the pixels inside the box. I weighted the pixels by how far they protruded from the face and by the brightness of the IR reflection.”
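
    A minimal sketch of that refinement step, again under my own assumptions about buffer layout and units rather than Steenberg’s actual implementation, could look like this:

    // Hypothetical sketch: average the pixel positions inside a box around the
    // head, weighting each pixel by how far it protrudes toward the camera and
    // by its IR brightness. faceDepth and boxDepth are in millimeters.
    static (double X, double Y) WeightedHeadCenter(
        ushort[] depth, byte[] ir, int width,
        int x0, int y0, int x1, int y1, int faceDepth, int boxDepth)
    {
        double sumX = 0, sumY = 0, sumWeight = 0;
        for (int y = y0; y < y1; y++)
        {
            for (int x = x0; x < x1; x++)
            {
                int i = y * width + x;
                int d = depth[i];
                if (d == 0 || d > faceDepth + boxDepth) continue;   // outside the head box

                // Pixels closer to the camera and with brighter IR returns count more.
                double protrusion = (faceDepth + boxDepth) - d;
                double weight = protrusion * ir[i];
                sumX += weight * x;
                sumY += weight * y;
                sumWeight += weight;
            }
        }
        return sumWeight > 0
            ? (sumX / sumWeight, sumY / sumWeight)
            : (double.NaN, double.NaN);     // no usable pixels in the box
    }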

    While working with this technology, Steenberg drew on his vast programming knowledge to think ahead and create one adaptable app. He quickly supplemented the API, and he believes that with higher-resolution gesture cameras, head tracking will come of age on new devices. “Head tracking has reached a point where it’s very useful,” said Steenberg. “If we get double or quadruple resolution on these gesture cameras we’ll have very good head tracking, and that’s exciting for a lot of uses. I’m excited about the ability to gather situational awareness data for computers. For example, you could get a 3D scan of an item by photographing it with a depth-capable gesture camera, take measurements from the 3D model, and then manipulate the model to get immediate feedback.”

    “The user interface should disappear and connect users directly to the machine,” said Steenberg. “When you drive a car you don’t think about how to turn left, you just make the turn. If you have doubts that the wheel will turn the car, the interface becomes a disaster; for an interface to disappear you must trust it 100 percent, and if it fails once it becomes worthless. This creates an incredibly high bar for tracking and voice recognition to reach.” But Steenberg believes advances in web cameras will create new opportunities for eye tracking and intelligent communication between PCs and users. He believes the key to the future is thinking beyond present-day use.


    Seduce demo “vanishing point”

    “When developing a game or tool, I think about what I want to do now and what I want to do in the future. I want projects that spawn not just a new application but new libraries and new technologies that align with the future. Preferably, my products will leave users with a lot of options and flexibility.”

    For Steenberg, the Intel Ultimate Coder Challenge is about building technology for tomorrow: making something that supports the Intel Perceptual Computing SDK and the new generation of Ultrabook devices, and laying groundwork for everything developers will do in the future to support these and other emerging technologies.

    Lessons Learned


    Today’s consumers and business workers are accessing everything, from the Internet to productivity tools, on a variety of connected devices. Developers are expected to provide a suitable interface for each unique device. Steenberg built a future-proof library to complement Seduce, an app that adapts to any type of user interface. Every project of this nature involves challenges. When the Intel Perceptual Computing SDK and built-in camera presented limitations, Steenberg used his experience to think outside the box and create his own solution. He used his programming skills to create an app that is built for today’s evolving landscape and will adapt as camera technology improves and portable computing advances.

    About Eskil


    Eskil Steenberg, an avid programmer, game designer, and participating developer of OpenGL, has worked on experimental programming projects such as Verse, a network protocol for computer graphics and connected micro-applications that can synchronize two different graphical applications in real time. He recently worked on Love, a massive procedural action-adventure research project focused on what video games should be, and he’s currently developing a new strategy game called Dark Side of the Moon. Steenberg believes it’s important to always make room for more innovation.


    Eskil Steenberg

    Resources


    Eskil Steenberg supplemented his skills with resources such as Component Source. Along with the other contestants, Steenberg also utilized the Intel forums and Intel hardware support.

    Intel does not make any representations or warranties whatsoever regarding quality, reliability, functionality, or compatibility of third-party vendors and their devices. For optimization information, see software.Intel.com/en-us/articles/optimization-notice/. All products, dates, and plans are based on current expectations and subject to change without notice. Intel, the Intel logo, Intel Core, Intel AppUp, Intel Atom, the Intel Inside logo, and Ultrabook, are trademarks of Intel Corporation in the U.S. and/or other countries. *Other names and brands may be claimed as the property of others. Copyright © 2013. Intel Corporation. All rights reserved.

  • ultimate coder challenge
  • Seduce
  • OpenGL
  • Windows* 8
  • Microsoft Windows* 8
  • OpenGL*
  • Perceptual Computing
  • Game Development
  • Sensors
  • Laptop
    Location Data Logger Design and Implementation, Part 5: The Data Grid View


    This is part 5 of a series of blog posts on the design and implementation of the location-aware Windows Store app "Location Data Logger". Download the source code to Location Data Logger here.

    The Data Grid View

    As useful as the map display is, sometimes you just want to look at your raw data, and Location Data Logger makes this possible via its data grid view. Here you can see your logged data points and scroll through their history, in addition to viewing attributes such as speed and heading that just aren't easily visualized on the map. This blog post discusses the design of the table view and the data bindings used to display the data in it.

    Creating the table

    The first step in coding this data grid view, however, was coming to terms with the fact that the Windows* Runtime does not have a widget for this purpose.

    Windows 8 can display collections of items, of course, but that interface is not oriented like a traditional table: it doesn't have data columns in discrete rows. The grid view is designed to display tiles (basically, summary content) which you click or touch to either expand or open as a new page. While it is arguably useful for rolling up multiple pieces of independent, unique content into a concise display, it is not at all useful for an actual grid of data like a table or spreadsheet, and it is not appropriate for displaying our logged points in Location Data Logger. At least, not if you want to be able to quickly skim the data and see what changes from track point to track point.

    There are third-party libraries in the wild that actually do provide a traditional table view, but I wanted to limit the number of add-ons and libraries that were required for Location Data Logger in order to simplify the build process as well as the legalities around its distribution, and that meant rolling my own solution using Grid and ListView elements.

    The XAML for the table structure is shown below:

    <Grid x:Name="gridData" Grid.Row="1" Opacity="0" Visibility="Collapsed">
        <Grid.RowDefinitions>
            <RowDefinition Height="Auto"/>
            <RowDefinition Height="*"/>
            <RowDefinition Height="Auto"/>
        </Grid.RowDefinitions>
        <Grid Grid.Row="0" Margin="0,15,0,0" Background="#FF999999">
            <Grid.ColumnDefinitions>
                <ColumnDefinition Width="200"/>
                <ColumnDefinition Width="100"/>
                <ColumnDefinition Width="100"/>
                <ColumnDefinition Width="80"/>
                <ColumnDefinition Width="80"/>
                <ColumnDefinition Width="80"/>
                <ColumnDefinition Width="80"/>
                <ColumnDefinition Width="80"/>
                <ColumnDefinition Width="80"/>
            </Grid.ColumnDefinitions>
            <TextBlock Grid.Column="0" Text="Timestamp (UTC)" Style="{StaticResource ColumnHeader}"/>
            <TextBlock Grid.Column="1" Text="Latitude" Style="{StaticResource ColumnHeader}"/>
            <TextBlock Grid.Column="2" Text="Longitude" Style="{StaticResource ColumnHeader}"/>
            <TextBlock Grid.Column="3" Text="Accuracy" Style="{StaticResource ColumnHeader}"/>
            <TextBlock Grid.Column="4" Text="Altitude" Style="{StaticResource ColumnHeader}"/>
            <TextBlock Grid.Column="5" Text="Altitude Accuracy" TextWrapping="Wrap" Style="{StaticResource ColumnHeader}"/>
            <TextBlock Grid.Column="6" Text="Speed" Style="{StaticResource ColumnHeader}"/>
            <TextBlock Grid.Column="7" Text="Heading" Style="{StaticResource ColumnHeader}"/>
            <TextBlock Grid.Column="8" Text="Precision" Style="{StaticResource ColumnHeader}"/>
        </Grid>
        <Border Grid.Row="1" BorderThickness="1" BorderBrush="#FF999999">
            <ListView x:Name="listPoints" ScrollViewer.VerticalScrollMode="Enabled" ScrollViewer.VerticalScrollBarVisibility="Visible" ItemContainerStyle="{StaticResource DataGridStyle}" ItemsSource="{Binding}" SelectionMode="None" IsItemClickEnabled="False" IsDoubleTapEnabled="False">
                <ListView.ItemTemplate>
                    <DataTemplate>
                        <Grid VerticalAlignment="Top">
                            <Grid.ColumnDefinitions>
                                <ColumnDefinition Width="200"/>
                                <ColumnDefinition Width="100"/>
                                <ColumnDefinition Width="100"/>
                                <ColumnDefinition Width="80"/>
                                <ColumnDefinition Width="80"/>
                                <ColumnDefinition Width="80"/>
                                <ColumnDefinition Width="80"/>
                                <ColumnDefinition Width="80"/>
                                <ColumnDefinition Width="80"/>
                            </Grid.ColumnDefinitions>
                            <TextBlock Grid.Column="0" Text="{Binding Timestamp}" Style="{StaticResource DataCell}"/>
                            <TextBlock Grid.Column="1" Text="{Binding Latitude, Converter={StaticResource fs}, ConverterParameter={0:F6}}" Style="{StaticResource DataCell}"/>
                            <TextBlock Grid.Column="2" Text="{Binding Longitude, Converter={StaticResource fs}, ConverterParameter={0:F6}}" Style="{StaticResource DataCell}"/>
                            <TextBlock Grid.Column="3" Text="{Binding Accuracy}" Style="{StaticResource DataCell}"/>
                            <TextBlock Grid.Column="4" Text="{Binding Altitude}" Style="{StaticResource DataCell}"/>
                            <TextBlock Grid.Column="5" Text="{Binding AltitudeAccuracy}" Style="{StaticResource DataCell}"/>
                            <TextBlock Grid.Column="6" Text="{Binding Speed}" Style="{StaticResource DataCell}"/>
                            <TextBlock Grid.Column="7" Text="{Binding Heading}" Style="{StaticResource DataCell}"/>
                            <TextBlock Grid.Column="8" Text="{Binding Precision}" Style="{StaticResource DataCell}"/>
                        </Grid>
                    </DataTemplate>
                </ListView.ItemTemplate>
            </ListView>
        </Border>
        <ToggleButton x:Name="toggleAutoscroll" Grid.Row="2" Content="Autoscroll" IsChecked="False"/>
    </Grid>

    That's a lot of code, so let's break it down into chunks.

    The general approach I chose was to create the data table using the Grid element. Each data cell in the table is a cell in the Grid. The table itself sits inside an enclosing grid with three rows: the header for the table goes in the first row, and the body of the table goes in the second row. There is a third row in this grid, too, which holds a toggle button for turning the autoscroll feature on and off. The heights of the first and last rows are set to "Auto" so that those grid rows automatically size to the height of their content. The middle row of the enclosing grid is set to a height of "*", which means that the data rows of the table will fill up the remaining vertical space on the screen.

    By placing the table header inside its own Grid I can fix the header at the top of the table so that it is always visible even as the user scrolls through the data. A side effect of this, however, is that the width of each cell has to be set explicitly: because the header is separate from the rows of data, a width of "Auto" is just not feasible. The layout engine can only autosize based on the content of the headers, and can't account for differences in the column widths of the actual data rows.

    There are two ways of solving this. The first is to dynamically size the Grid columns in the program as data points are added, and the second is to fix the cell widths using static values. The former is actually easier than it sounds, but for the purposes of simplicity I went with the latter. While it's certainly less flexible, it's not a huge issue since I have control over the font sizes and what is displayed in the data cells. Thus I can choose column widths that are guaranteed to be large enough to display the data without wasting a great deal of screen space.
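
    For the curious, the dynamic approach could be sketched roughly as follows. This is not how Location Data Logger works; gridHeader and dataRows are hypothetical names, the data-row grids would need their fixed widths removed, and a real version would re-run after each layout pass.

    // Hypothetical sketch: widen each header column to match the widest
    // corresponding cell found in the data rows.
    private void SyncColumnWidths(Grid gridHeader, IEnumerable<Grid> dataRows)
    {
        for (int col = 0; col < gridHeader.ColumnDefinitions.Count; col++)
        {
            double widest = gridHeader.ColumnDefinitions[col].ActualWidth;
            foreach (Grid row in dataRows)
            {
                // Cells are added in column order, so child index == column index here.
                var cell = (FrameworkElement)row.Children[col];
                widest = Math.Max(widest, cell.ActualWidth);
            }
            gridHeader.ColumnDefinitions[col].Width = new GridLength(widest);
        }
    }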

    The body of the table is enclosed within a ListView element with vertical scrolling enabled. Use of ListView lets me bind a data object to a template via ListView.ItemTemplate and DataTemplate, but has yet another side effect: each table row has to be its own Grid element, because there can be only one child object inside of the template. Since the table header is already its own grid with fixed-size columns, though, I've already paid that price.

    The data bindings

    The use of a template inside the ListView element allows me to create static XAML that expands to a dynamic list of elements during execution. The DataTemplate is what maps variables in MainPage to the TextBlock elements inside the Grid. The first step for this sort of binding is to associate the ListView with the source object (for clarity, I have only listed the attributes necessary for the data binding below):

    <ListView x:Name="listPoints" ItemsSource="{Binding}">

    This creates an object named listPoints that is visible to the MainPage object, and the {Binding} expression ties the ListView's items to its DataContext. The listPoints object has a property called DataContext which defines the data elements. For this application, DataContext is set to an object of type ObservableCollection<Trackpoint>:

    points = new ObservableCollection<Trackpoint>();
    listPoints.DataContext = points;

    This configures listPoints to hold a collection of Trackpoint objects. When the template is expanded in the UI, each item in the ListView will have a trackpoint associated with it. The expansion of the Trackpoint object to the template is defined in the DataTemplate element. Again, for clarity, I'll reduce that to just the TextBlock elements inside the Grid:

    <TextBlock Grid.Column="0" Text="{Binding Timestamp}" Style="{StaticResource DataCell}"/>
    <TextBlock Grid.Column="1" Text="{Binding Latitude, Converter={StaticResource fs}, ConverterParameter={0:F6}}" Style="{StaticResource DataCell}"/>
    <TextBlock Grid.Column="2" Text="{Binding Longitude, Converter={StaticResource fs}, ConverterParameter={0:F6}}" Style="{StaticResource DataCell}"/>
    <TextBlock Grid.Column="3" Text="{Binding Accuracy}" Style="{StaticResource DataCell}"/>
    <TextBlock Grid.Column="4" Text="{Binding Altitude}" Style="{StaticResource DataCell}"/>
    <TextBlock Grid.Column="5" Text="{Binding AltitudeAccuracy}" Style="{StaticResource DataCell}"/>
    <TextBlock Grid.Column="6" Text="{Binding Speed}" Style="{StaticResource DataCell}"/>
    <TextBlock Grid.Column="7" Text="{Binding Heading}" Style="{StaticResource DataCell}"/>
    <TextBlock Grid.Column="8" Text="{Binding Precision}" Style="{StaticResource DataCell}"/>
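
    Before looking at how each column is bound, it may help to see a minimal sketch of the Trackpoint shape these bindings assume. The real class in the downloadable source has more to it, so treat this purely as an illustration of the bound properties (the units in the comments are my assumptions):

    // Illustrative only: the properties the bindings above expect on Trackpoint.
    public class Trackpoint
    {
        public DateTimeOffset Timestamp { get; set; }
        public double Latitude { get; set; }            // degrees
        public double Longitude { get; set; }           // degrees
        public double Accuracy { get; set; }            // meters
        public double? Altitude { get; set; }           // meters, may be unavailable
        public double? AltitudeAccuracy { get; set; }   // meters
        public double? Speed { get; set; }              // meters per second
        public double? Heading { get; set; }            // degrees
        public string Precision { get; set; }           // meaning defined in the app source
    }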

    For most of these items, the data binding is just a simple binding to a property of the Trackpoint object, such as the timestamp, speed, heading, and so on. For the Latitude and Longitude properties, however, I employ a converter.

    The purpose of the converter is to change the default display behavior of the data that is printed. The Latitude and Longitude properties are both floating point values, and by default they get displayed in exponential notation, which can be difficult to read at a glance. The converter changes the default formatting by passing the "F6" format to String.Format, which prints the number with six decimal places. That is sufficient for this purpose: a degree of longitude at the equator is about 111 km, so 0.000001 degrees corresponds to roughly 0.11 meters, which is more precision than a consumer-grade GNSS receiver can produce.

    The converter function is defined by linking a namespace key to a class definition:

    <local:FormatString x:Key="fs"/>

    And the convert function itself is in FormatString.cs.

    public object Convert(object value, Type type, object param, String lang)
    {
        // Use "as String" so fmt is null if param is not a string.
        String fmt = param as String;
        CultureInfo culture;

        // No format string supplied: fall back to the default string conversion.
        if ( String.IsNullOrEmpty(fmt) ) return value.ToString();

        // Format using the language passed in by the binding engine, if possible.
        culture = new CultureInfo(lang);
        if (culture != null) return String.Format(culture, fmt, value);

        return String.Format(fmt, value);
    }

    The format string is passed as the parameter object to the function, and the function uses that to call String.Format.
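
    For example, with the "{0:F6}" parameter from the XAML above, the converter turns a raw latitude into a fixed six-decimal string (the culture string here is just for illustration):

    // Illustrative call: this is effectively what the binding engine does.
    var fs = new FormatString();
    string text = (string)fs.Convert(45.1234567, typeof(string), "{0:F6}", "en-US");
    // text is now "45.123457"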

    All that's left now is to add items to the listPoints object as track points are logged. This takes place inside the update_position callback. Remember, the DataContext for listPoints is set to the points collection, so we add each trackpoint to points. I also automatically scroll to the bottom of the table if autoscrolling has been turned on.

    if (logged)
    {
        // Add the point to our table
        points.Add(t);
    
        ...
    
        if ((Boolean)toggleAutoscroll.IsChecked)
        {
            listPoints.UpdateLayout();
            listPoints.ScrollIntoView(listPoints.Items.Last());
        }
    }

    ← Part 4: Bing Maps Integration | Part 6: The Export Class →
  • geolocation gps gnss

    Meshcentral.com - Now with Intel AMT certificate activation


    I just added certificate-based Intel AMT cloud activation support (TLS-PKI) to Meshcentral.com. It works behind NATs and HTTP proxies, uses a reusable USB key, and makes use of the Intel AMT one-time password (OTP) for improved security.

    Ok, let’s back up a little. Computers with Intel AMT need the feature activated before it can be used. Historically it’s been difficult to set up the software, network, certificates, and settings needed to start activating Intel AMT, especially for smaller businesses, in a way that allows administrators to use all of its features. It’s even more difficult if all the computers are mobile. With Mesh, we want to put all of the Intel AMT activation in the cloud, so administrators don’t need to worry about how it all works. Administrators can launch their own instance of Mesh on Amazon AWS, install the mesh agent on each of their machines and, when time permits, create and use a single USB key to touch each machine for Intel AMT activation.

    Meshcentral.com will automatically detect when a computer can be activated and do all of the appropriate work in the background, even behind an HTTP proxy or NAT/double-NAT routers. Mesh fully supports Intel AMT Client Initiated Remote Access (CIRA), so once activated, Intel AMT can call back to the Mesh server independent of OS state. Administrators can then use the web site or tools like Manageability Commander Mesh Edition to use Intel AMT features across network obstacles. Mesh will automatically route traffic using direct, relay, or CIRA connections, so administrators never need to worry about how to connect to a machine over the Internet. As an aside, Mesh fully supports Host Based Provisioning, so that is still an available option if you don’t want to touch each machine with a USB key and are ok with the client-mode limitations.

    A full video demonstration is available here.

    Enjoy!
    Ylian
    https://meshcentral.com

  • Mesh
  • MeshCentral
  • MeshCentral.com
  • AMT
  • Intel AMT
  • vPro
  • Intel vPro
  • activation
  • tls
  • TLS-PKI
  • PKI
  • Ylian
  • Developers
  • Partners
  • Professors
  • Students
  • Android*
  • Apple Mac OS X*
  • Linux*
  • MeeGo*
  • Microsoft Windows* (XP, Vista, 7)
  • Microsoft Windows* 8
  • Moblin*
  • Tizen*
  • Unix*
  • Android*
  • Business Client
  • Cloud Services
  • HTML5
  • Server
  • Windows*
  • Intel® AMT Software Development Kit
  • Intel® Active Management Technology
  • Cloud Computing
  • Development Tools
  • Education
  • Embedded
  • Enterprise
  • Geolocation
  • Healthcare
  • Intel® Atom™ Processors
  • Intel® Core™ Processors
  • Intel® vPro™ Technology
  • Microsoft Windows* 8 Desktop
  • Microsoft Windows* 8 Style UI
  • Mobility
  • Open Source
  • Power Efficiency
  • Security
  • Sensors
  • Small Business
  • Touch Interfaces
  • User Experience and Design
  • Laptop
  • Phone
  • Server
  • Tablet
  • Desktop

