Retargeting a BayTrail* Windows* 8 Store Sample Application to Windows 8.1


Download Article

Download Retargeting a BayTrail* Windows* 8 Store Sample Application to Windows 8.1 [PDF 602KB]

Abstract

This article discusses the process of retargeting an existing healthcare sample app from Windows 8 to Windows 8.1. In particular, it covers the features added to Visual Studio 2013 for retargeting/importing Windows Store apps, the build errors encountered, and the challenges faced. The use of 3rd party libraries and of newly available UI controls and features is also discussed.

Overview


Windows 8.1 brings new features, APIs, UI, and performance enhancements to the Windows platform. It is up to the developer to take advantage of the new features and re-implement the parts of an app that will benefit from the Windows 8.1 enhancements. Even a simple retarget and recompile can yield benefits such as faster app startup and automatic Windows Store app updates.

This article will look at migrating a sample healthcare app to Windows 8.1.

For an in-depth and general technical discussion of migrating a Windows Store App to Windows 8.1, please refer to the following white paper.

http://msdn.microsoft.com/en-us/library/windows/apps/dn376326.aspx

Retargeting Windows Store Apps to Windows 8.1

Depending on the functionality and complexity of your app, retargeting a Windows Store app to Windows 8.1 is a relatively straightforward process.

Developers can plan their retargeting process to be incremental. Initially, the app can simply be recompiled, resolving any build errors, so that it runs on the Windows 8.1 platform. Subsequently, developers can review any functionality in the app that would benefit from re-implementation using the newly available APIs. Finally, the retargeting process gives the developer an opportunity to review the compatibility of 3rd party libraries with Windows 8.1.

When the healthcare sample app was migrated to Windows 8.1, we performed a simple recompile, checked the usage of 3rd party libraries, and re-implemented the settings control using the new Windows 8.1 XAML control.

For reference, the Microsoft Developer Network has extensive documentation that covers all facets of migrating an app to Windows 8.1. Please refer to the following link.

http://msdn.microsoft.com/en-us/library/windows/apps/dn263114.aspx

A Healthcare Windows Store App

As seen in several other articles in this forum, we will use a sample healthcare Line of Business Windows Store app.

Some of the previous articles include:

The application allows the user to log in to the system, view the list of patients (Figure 1), and access patient medical records, profiles, doctor’s notes, lab test results, and vitals graphs.


Figure 1: The “Patients” page of the Healthcare Line of Business app provides a list of all patients. Selecting an individual patient provides access to the patient’s medical records.

Retargeting sample healthcare app

Before the sample app is retargeted, it is useful to review different components, UI features, and 3rd party libraries that are used.

The UI and the core app life cycle handling were implemented using the templates available in Windows 8. Windows 8.1 updated the project and page templates and added a brand new Hub pattern template. The app uses SQLite* as its backend database to store all patient records. WinRTXamlToolkit* is used for charts, and the 3rd party library Callisto* is used to implement the Settings Control, which is invoked from the charms bar. Windows 8.1 has a new XAML-based settings control that can be used instead of a 3rd party library.

The app has search functionality implemented using the charms bar integrated in Windows 8. Windows 8.1 has a new in-app search UI control that could be used to extend the search experience to different UI pages of the app, depending on the requirements.
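
For example, the new in-app search control could be placed directly on a page such as the patients list. The following is a minimal sketch, not part of the sample app; the control name and event handler shown here are illustrative:

<SearchBox x:Name="PatientSearchBox"
           PlaceholderText="Search patients"
           QuerySubmitted="PatientSearchBox_QuerySubmitted" />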

The sample app has several other functionalities like camera usage, NFC and audio recording that will continue to function in Windows 8.1 without any changes.

As mentioned earlier, Visual Studio 2013 was used to recompile the sample app for Windows 8.1, 3rd party library build issues were fixed, and parts of the app were re-implemented using new Windows 8.1 features. The app can be refined further in the future by re-implementing more pieces of the app that benefit from the Windows 8.1 platform. For example, new page templates, view models for different screen sizes, the new in-app search, or the new tile sizes and templates could be utilized.

Using Visual Studio 2013 to import the app

To retarget the app for Windows 8.1, first download and install Visual Studio 2013 on a Windows 8.1 host machine. After the installation, ensure any 3rd party libraries are updated to the latest versions via the Visual Studio extensions dialog.

The project was opened in Visual Studio 2013, and no errors were seen when compiling the project out of the box. To retarget the project to Windows 8.1, right-click the project name in Solution Explorer; the option for retargeting the project to Windows 8.1 is shown in the list (Figure 2).


Figure 2: Option for retargeting the project (captured from Visual Studio* 2013)

Clicking on this option brings up a dialog box asking for confirmation. Verify that the project selected is correct and press the OK button.


Figure 3: Confirmation Dialog for retargeting (captured from Visual Studio 2013*)

After Visual Studio completes the action, you should see “Windows 8.1” next to the project name in Solution Explorer (Figure 4).


Figure 4: Solution Explorer shows the project is retargeted to Windows 8.1 (captured from Visual Studio 2013*)

When trying to compile the project, build errors may occur. Figure 4 also shows build issues in some of the 3rd party libraries. The next section discusses resolving build errors and 3rd party library issues.
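
Behind the scenes, the retarget is recorded in the project file. For a C#/XAML Store app project, the change amounts to something like the following simplified sketch (the exact elements can vary by project type):

<PropertyGroup>
  <!-- Set by the Visual Studio 2013 "Retarget to Windows 8.1" action -->
  <TargetPlatformVersion>8.1</TargetPlatformVersion>
</PropertyGroup>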

Fixing build errors and 3rd party library issues

Updating the 3rd party libraries to their latest versions resolves some of the problems. The Visual Studio* extensions dialog can be used to check for the latest available library versions. SQLite* was updated to the Windows 8.1 version, as shown in Figure 5.


Figure 5: Extensions dialog (captured from Visual Studio 2013*)

The usage of some of the 3rd party libraries in the app was re-evaluated after migrating to Windows 8.1. As mentioned earlier, a settings UI control has been added in Windows 8.1, so it was decided to remove the 3rd party library Callisto* from the app and utilize the native Windows 8.1 control. To migrate to the native control, all source code references to the Callisto* library were removed. Figure 6 shows the updated project references in Solution Explorer.


Figure 6: Project references in sample app after retargeting to Windows 8.1* (captured from Visual Studio 2013*)

WinRTXamlToolkit* is still being utilized for charts and other features, so it has been updated to the Windows 8.1 version.

By using the newly available Windows 8.1 XAML settings control, the app maintains the same look and feel as when using the 3rd party library. Figure 7 shows the settings control in design mode.


Figure 7: Settings Flyout UI in design mode (captured from Visual Studio 2013*)

Using the new XAML based settings control, SettingsFlyout, is similar to other XAML controls. The following snippet shows the XAML code used for the sample app’s settings UI.

<SettingsFlyout
    x:Class="PRApp.Views.PRSettingsFlyout"
    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
    xmlns:local="using:PRApp.Views"
    xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
    xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
    mc:Ignorable="d"
    IconSource="Assets/SmallLogo.png"
    Title="PRApp Options"
    HeaderBackground="{StaticResource StandardBackgroundBrush}"
    d:DesignWidth="346"
    xmlns:c="using:PRApp.ViewModels">

    <SettingsFlyout.Resources>
        <c:SessionSettingsViewModel x:Key="myDataSource"/>
    </SettingsFlyout.Resources>
    <!-- This StackPanel acts as a root panel for vertical layout of the content sections -->
    <StackPanel VerticalAlignment="Stretch" HorizontalAlignment="Stretch">


        <StackPanel Orientation="Horizontal"  Margin="5">
            <TextBlock Text="{Binding Source={StaticResource myDataSource}, Path=SessionSettings.Loginuser.Login}" Margin="0,0,5,0" />
            <TextBlock Text="{Binding Source={StaticResource myDataSource}, Path=SessionSettings.Loginuser.Loginmsg}" />
        </StackPanel>
        <Button Content="User Home" Margin="5" Command="{Binding Source={StaticResource prAppUtil}, Path=UserHomeCmd}"/>
        <Button Content="Logout" Margin="5" Click="Button_Click_1" />
        <ToggleSwitch Header="Show Deceased Patients" Margin="5" IsOn="{Binding Mode=TwoWay, Source={StaticResource myDataSource}, Path=SessionSettings.ShowDeceased}"/>
        <StackPanel Orientation="Horizontal"  Margin="5">
            <ToggleSwitch Header="Use Cloud Service" Margin="5,0,0,0" IsOn="{Binding SessionSettings.UseCloudService, Mode=TwoWay, Source={StaticResource myDataSource}}"/>
        </StackPanel>
        <StackPanel Orientation="Vertical"  Margin="5">
            <TextBlock Margin="5" FontSize="14" Text="Server Address:" Width="97" HorizontalAlignment="Left" VerticalAlignment="Center" />
            <TextBox HorizontalAlignment="Stretch" FontSize="12" Margin="5" Text="{Binding SessionSettings.ServerUrl, Mode=TwoWay, Source={StaticResource myDataSource}}"  />
        </StackPanel>


        <Button HorizontalAlignment="Right"  Click="Button_Click_Test_Connection" Content="Test Connection"/>
        <TextBlock  TextWrapping="Wrap"  x:Name="StatusText" HorizontalAlignment="Left" Text="{Binding SessionSettings.TestConnectionStatus, Source={StaticResource myDataSource}}"  />

    </StackPanel>
</SettingsFlyout>

Figure 8: XAML code snippet for settings flyout in sample app

Configuring and initializing the SettingsFlyout is done in the app’s main entry point (the App.xaml.cs file). A SettingsCommand that shows the flyout is added to the ApplicationCommands collection. Please see the code snippet in Figure 9 for reference.

protected override void OnWindowCreated(WindowCreatedEventArgs args)
{
    Windows.UI.ApplicationSettings.SettingsPane.GetForCurrentView().CommandsRequested += Settings_CommandsRequested;
}

void Settings_CommandsRequested(Windows.UI.ApplicationSettings.SettingsPane sender, Windows.UI.ApplicationSettings.SettingsPaneCommandsRequestedEventArgs args)
{
    Windows.UI.ApplicationSettings.SettingsCommand PRSettingsCmd =
        new Windows.UI.ApplicationSettings.SettingsCommand("PRAppOptions", "PRApp Options", (handler) =>
        {
            PRSettingsFlyout PRSettingsFlyout = new PRSettingsFlyout();
            PRSettingsFlyout.Show();

        });

    args.Request.ApplicationCommands.Add(PRSettingsCmd);
}

Figure 9: Code snippet showing the settings flyout initialization

SettingsFlyout is an excellent feature in Windows 8.1 that is easy to use, and it comes with all the design best practices recommended for Windows Store apps. In addition, the effort to transition to this native control was painless.

Summary

This article discussed retargeting a sample health care Windows Store App from Windows 8 to Windows 8.1. The steps involved in the retargeting process were covered in detail with relevant screenshots and code snippets. The article concluded with a discussion about replacing a 3rd party library with a native Windows 8.1 control.

Intel and the Intel logo are trademarks of Intel Corporation in the US and/or other countries.

Copyright © 2013 Intel Corporation. All rights reserved.

*Other names and brands may be claimed as the property of others.

++This sample source code is released under the Intel OBL Sample Source Code License (MS-LPL Compatible), Microsoft Limited Public License, and Visual Studio* 2013 License.


  • Implementing multi-user multi-touch scenarios using WPF in Windows* 8 Desktop Apps


    Downloads

    Implementing multi-user multi-touch scenarios using WPF in Windows* 8 Desktop Apps [PDF 602KB]
    Multiuser-Multitouch-Codesample.zip [ZIP 206KB]

    Summary

    In this paper we walk through a sample application (in this case a game that quizzes people on the Periodic Table) that enables multi-user, multi-touch capability and is optimized for large touchscreen displays. By using User Controls and touch events, we can enable a scenario where multiple users can play the game at the same time.

    Windows Presentation Foundation (WPF) provides a deep touch framework that allows us to handle low-level touch events and support a multitude of scenarios from simple touch scrolling to a multi-user scenario. This game has two areas where users can touch, scroll, and click using their fingers simultaneously while the remainder of the UI remains responsive. Finally, this application was designed and built using XAML and C# and follows the principles of the Model-View-ViewModel software development pattern.

    Supporting Large Touch Displays and multiple users in Windows Presentation Foundation

    WPF is an excellent framework for building line-of-business applications for Windows desktop systems, but it can also be used to develop modern and dynamic applications. You can apply many of the same principles you use for designing applications in WPF with some small tweaks to make them friendly and easy to use on a large format display.

    The XAML markup language has, as a foundational principle, lookless controls. This means that the appearance and styling of a control is separate from the control’s implementation. The control author may provide a default style for the control, but this can easily be overridden. If you place a style in your XAML (implicit or explicit), it will extend the base style that ships with the framework. You can also use the template extraction features in Visual Studio* 2012 to make a copy of the styles and templates that ship with the .NET framework and use it as a starting point.

    Let’s look at an example:

    To create a window with a custom close button, I created an empty WPF project in Visual Studio and edited the MainWindow.xaml file as follows:

    <Window x:Class="ExampleApplication.MainWindow"
            xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
            xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
            Title="MainWindow" Height="350" Width="525" WindowStyle="None">
        <Grid>
            <Button HorizontalAlignment="Right" VerticalAlignment="Top" Content="Close" Click="Close_Window" />
        </Grid>
    </Window>
    

    I then wrote a C# method to handle closing the window:

            private void Close_Window(object sender, RoutedEventArgs e)
            {
                this.Close();
            }
    

    This created a Window like the one below:

    Since we are on the Windows 8 platform, we can use the Segoe UI Symbol font to put the close symbol in the button. You can browse for the symbol you want to use in the Windows Character Map under the Segoe UI Symbol font:

    Now that I have the character code, I can begin customizing the button. To start, I added the close symbol to the button:

    <Button HorizontalAlignment="Right" VerticalAlignment="Top" FontFamily="Segoe UI Symbol" Content="" Click="Close_Window" />
    

    I also want to style the button to make it touch-friendly by applying an XAML style. This can be done by creating an implicit style anywhere above the button in its visual hierarchy. I will add the Button style to the Window’s resources so that it’s available to any button within the Window:

    <Style TargetType="Button">
                <Setter Property="BorderBrush" Value="White" />
                <Setter Property="Background" Value="Transparent" />
                <Setter Property="Foreground" Value="White" />
                <Setter Property="BorderThickness" Value="2" />
                <Setter Property="Padding" Value="12,8" />
                <Setter Property="FontSize" Value="24" />
                <Setter Property="FontWeight" Value="Bold" />
            </Style>
    

    To illustrate this effect, I changed the Window’s background color to white. The above style will result in a button that appears like this:

    You can always change the style to have a larger icon and less padding, for example. With buttons and text content, you may find yourself using static padding, margin, and size values since they rarely change. If you want text content to be truly responsive, you can put it in a Viewbox so that it scales in size relative to the Window. This isn’t necessary for most large-screen applications, but it is something to consider if your application will operate at very extreme resolutions.

    For most UI elements, you will want to base your padding and margins off of relative sizes. This can be accomplished by using a Grid as your layout system. For example, in the demo application, we wanted a very thin amount of space around each periodic table element. I could use a 1px padding around each item, but the appearance of the width of that padding will differ between users on large displays and small displays. You also have to consider that your end users might be using much larger monitors and resolutions than your development environment may support. To resolve this issue, I use the grids to create rows and columns to represent the padding. For example, I can create a grid with 3 rows and 3 columns like below:

    <Grid x:Name="tableRoot">
                <Grid.RowDefinitions>
                    <RowDefinition Height="0.01*"/>
                    <RowDefinition Height="0.98*"/>
                    <RowDefinition Height="0.01*"/>
                </Grid.RowDefinitions>
                <Grid.ColumnDefinitions>
                    <ColumnDefinition Width="0.01*"/>
                    <ColumnDefinition Width="0.98*"/>
                    <ColumnDefinition Width="0.01*"/>
                </Grid.ColumnDefinitions>
    </Grid>
    
    

    In grid definition sizing, you have three options available: static sizing using an absolute height or width, auto sizing that measures the content to determine its size, and relative (star) sizing. You can also mix and match the different options. In our example, we make heavy use of relative sizing. The XAML engine sums the values for the relative sizing and assigns each row or column a size equivalent to the ratio of its individual value to the whole. For example, if you have columns sized like below:

    <Grid.ColumnDefinitions>
                    <ColumnDefinition Width="4*"/>
                    <ColumnDefinition Width="7*"/>
                    <ColumnDefinition Width="9*"/>
                </Grid.ColumnDefinitions>
    
    

    The sum of the column widths (4, 7, and 9) is 20. So each width is the ratio of each value to the total of 20. The first column would be 4/20 (20%), the second column would be 7/20 (35%), and the final column would be 9/20 (45%). While this works fine, it’s considered a good practice to have all of your columns (or rows) sum up to either 100 or 1 for simplicity’s sake.

    In the first example, we make sure that the heights and widths add up to a value of 1. The column and row indexes are zero-based, so we can put the content in Column 1 and Row 1 and it will have a 1% padding all around. This is 1% regardless of the resolution and will appear relatively the same to users regardless of their resolution. A padding set to a static size will be much thinner on a large touchscreen display with a high resolution than you expect it to be during development. In the periodic table application, you can see this 1% padding when browsing the table itself:

    You can also enable touch scrolling for your application to make it more responsive. Out of the box, WPF allows you to use your finger to scroll through a list element. The ScrollViewer does lock your scrolling to each element so it’s more like flicking between elements. If you want to enable “smooth” scrolling, you should set the PanningMode of the ScrollViewer. By default, the PanningMode is set to None. By setting it to VerticalOnly or HorizontalOnly, you will enable smooth scrolling through items in a list view. In the Periodic table application, the ScrollViewer.PanningMode attached property is used to enable this scenario on a typical ListView. I also set the ScrollViewer.CanContentScroll property to false so that the items will not snap and the user can use their finger to hover between items.

    <ListView x:Name="SecondBox" Background="Transparent" ItemsSource="{Binding Source={StaticResource PeriodicData}}" 
                      ScrollViewer.VerticalScrollBarVisibility="Disabled" 
                      ScrollViewer.HorizontalScrollBarVisibility="Visible"
                      ScrollViewer.PanningMode="HorizontalOnly" 
                      ScrollViewer.CanContentScroll="False"></ListView>
    

    The ListView mentioned is used in the application for viewing Periodic table items like below:

    Finally, WPF allows us to use the built-in touch support that has been around since Windows 7. Windows recognizes touch input as a mouse when you don’t specifically handle the touch events such as Tapped, ManipulationDelta, and ManipulationEnded. This allows you to handle the event where users tap any of the above items by using the Click event handler. This also minimizes the amount of code necessary to support both touch and a mouse.

    Since touch support is implemented at a very low level, the WPF platform does not group touches by user or cluster. To get around this, control authors typically use visual cues (such as a border or a box) to indicate to users that they should touch within a specific area. To support multiple users, we can put the touch-supported controls within a UserControl. The browsable Periodic table used to find the Periodic elements as part of this game is a UserControl, so we can put as many or as few of them on a screen as we want.

    The Model-View-ViewModel Pattern

    When building the application, it would be easy to write the code in the xaml.cs file and call it a day, but we want to maximize code reuse and build an application that is truly modular. We can accomplish this by leveraging the MVVM design pattern. In the Periodic Table application, every screen is bound to a ViewModel. This holds information for data-binding and controls the behaviors of the different Views. We also have a data source that uses XAML and need to manipulate the data source to run the game. The data source will be discussed in greater detail later in this article.

    Since MVVM is a popular design pattern, it is possible to use it in the WPF, Windows Store, and Windows Phone platforms. To support this scenario, we can put our Models and ViewModels into Portable Class Libraries (PCLs) that can be referenced by all of those platforms. The PCLs contain the common functionality and namespaces between all of those platforms and allow you to write cross-platform code. Many tools and libraries (such as Ninject, PRISM’s EventAggregator, and others) are available via NuGet and can be referenced in a PCL so you can create large-scale applications. If you need to support a new platform, you simply create new Views and reference the existing ViewModels and Models.

    This application is parsing a static data file that contains information about how to render the Periodic table. The Models are aware of the classes in WPF so PCLs would not be appropriate in this example.

    In this application, we use the PRISM framework to leverage the already well-built modules for MVVM development.

    For the home page, we have a BaseViewModel that has one command. The ExitCommand closes the application when executed. We can bind this command to the button mentioned earlier in the article by applying a data binding to the Button’s Command dependency property.

        public class BaseViewModel : NotificationObject
        {
            public BaseViewModel()
            {
                this.ExitCommand = new DelegateCommand(ExitExecute);
            }
    
            public DelegateCommand ExitCommand { get; private set; }
    
            private void ExitExecute()
            {
                Application.Current.Shutdown();
            }
        }
    

    First, the ViewModel inherits from PRISM’s NotificationObject class. This class contains all of the logic to let the View know when a ViewModel’s property is updated. This is accomplished by implementing the INotifyPropertyChanged interface. If you ever want to look at a very solid best-practices implementation of INotifyPropertyChanged, view the source code for the PRISM project to see how the team at Microsoft implemented the interface.
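
    For illustration, the pattern that NotificationObject provides boils down to something like the following minimal sketch (this is not PRISM’s actual source; the class name here is made up):

        // Minimal sketch of an INotifyPropertyChanged base class, for illustration only.
        using System.ComponentModel;

        public abstract class ObservableBase : INotifyPropertyChanged
        {
            public event PropertyChangedEventHandler PropertyChanged;

            // Derived classes call this from a property setter with the property's name.
            protected void RaisePropertyChanged(string propertyName)
            {
                var handler = PropertyChanged;
                if (handler != null)
                {
                    handler(this, new PropertyChangedEventArgs(propertyName));
                }
            }
        }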

    Next, we use the DelegateCommand class from the PRISM framework. DelegateCommand is an implementation of the ICommand interface that is at the heart of commanding in WPF. This class can be used to handle a button’s click event and the logic for determining whether a button is enabled. This support is not limited to buttons, but buttons are the primary case where ICommand is used.

    In our BaseViewModel class, we create a new instance of the DelegateCommand class and pass in the ExitExecute action to be executed when the Command is invoked (by pressing the button).
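
    In the View, wiring the button to this command is just a binding on the Command dependency property. A minimal sketch, assuming the Window’s DataContext is set to a BaseViewModel instance:

    <Button FontFamily="Segoe UI Symbol" Content="Close" Command="{Binding ExitCommand}" />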

    Because you can close the application from any screen, all of the other pages inherit from the BaseViewModel class. To keep all of the game-related logic together, both the 1-player and 2-player games use ViewModels that inherit from a GameViewModel class, which in turn inherits from BaseViewModel.

    The GameViewModel class implements publicly accessible properties that are used during a game. Below are a couple of example fields that are shown on a game screen:

    For example, we have a RoundTimeLeft property that shows how much time you have left in a round. The property is of type TimeSpan and it uses a private backing field. When you set the property, we use a method of the NotificationObject class to notify the View layer that a ViewModel property has been updated.

            private TimeSpan _roundTimeLeft;
            public TimeSpan RoundTimeLeft
            {
                get { return _roundTimeLeft; }
                private set
                {
                    _roundTimeLeft = value;
                    RaisePropertyChanged(() => RoundTimeLeft);
                }
            }
    
    

    This is especially useful in situations where you want the View to refresh multiple properties when you update a single field/property. Also, as a performance improvement for advanced applications, it is very common to check whether the value has actually changed before notifying the view (a sketch of that check follows the next snippet). Below is an example of the HintItem property and the Hint property that are used in the ViewModel. The Hint property is the symbol that is shown in the center, and we want to update that text anytime we store a new HintItem in the ViewModel. This is done by letting the View know that the Hint property has been updated:

            private PeriodicItem _hintItem;
            public string Hint
            {
                get
                {
                    return this.HintItem != null ? this.HintItem.Abbreviation : string.Empty;
                }
            }
    
            public PeriodicItem HintItem
            {
                get { return _hintItem; }
                private set
                {
                    _hintItem = value;
                    RaisePropertyChanged(() => Hint);
                    RaisePropertyChanged(() => HintItem);
                }
            }
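
    For reference, the equality check mentioned above might look like this (a sketch, not code from the sample):

            private TimeSpan _roundTimeLeft;
            public TimeSpan RoundTimeLeft
            {
                get { return _roundTimeLeft; }
                private set
                {
                    // Only raise the notification when the value actually changes.
                    if (_roundTimeLeft != value)
                    {
                        _roundTimeLeft = value;
                        RaisePropertyChanged(() => RoundTimeLeft);
                    }
                }
            }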
    

    The Model-View-ViewModel pattern is very powerful and allows testability and expanded code re-use when working on an application. The pattern is applicable whether you are working on a line-of-business application or a touch application. The GameViewModel class uses a timer and a loop to handle the execution of the game. Both OnePlayerViewModel and TwoPlayersViewModel inherit from GameViewModel and add specific logic for each type of game. The application also has a DesignGameViewModel that has a set of static properties so that we can see how the game will look at design time without having to run the application.
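
    The article does not show the game loop itself; one way such a countdown could be driven inside GameViewModel is with a DispatcherTimer. The following is a minimal sketch under that assumption, using the RoundTimeLeft property shown earlier (StartRound and OnRoundTick are illustrative names):

            // Sketch only. Requires: using System.Windows.Threading; (for DispatcherTimer)
            private readonly DispatcherTimer _roundTimer;

            public GameViewModel()
            {
                _roundTimer = new DispatcherTimer { Interval = TimeSpan.FromSeconds(1) };
                _roundTimer.Tick += OnRoundTick;
            }

            private void StartRound(TimeSpan roundLength)
            {
                RoundTimeLeft = roundLength;
                _roundTimer.Start();
            }

            private void OnRoundTick(object sender, EventArgs e)
            {
                // Count down once per second and stop when the round runs out.
                RoundTimeLeft = RoundTimeLeft - TimeSpan.FromSeconds(1);
                if (RoundTimeLeft <= TimeSpan.Zero)
                {
                    _roundTimer.Stop();
                    // end-of-round / scoring logic would go here
                }
            }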

    Tips & Tricks for building immersive applications in WPF

    There are a couple of XAML tricks that are used throughout this application to make it visually appealing and touch friendly. Some are very common, but there are a couple worth highlighting as they use some of the best features of WPF and XAML.

    First, the PeriodicTable itself is a WPF UserControl. This allows maximum code re-use as the control can simply be placed on any WPF Window. Within the control, Dependency Properties are used so that you can set features of the control and expose those features externally for data-binding. For example, the PeriodicTable has two states. ZoomedOut is when you see the entire table:

    ZoomedIn is when you see the detailed list. When clicking on a Periodic Group from the ZoomedOut view, the game jumps to that group on the ZoomedIn list. There is also a button in the bottom-right corner to zoom back out:

    To implement this, there are two list views representing each of the “Views.” A dependency property is created that will expose a property that anybody can set. A PropertyChanged event handler is then created so that the control can respond to changes from both code and data-bindings all in one location:

            public static readonly DependencyProperty IsZoomedInProperty = DependencyProperty.Register(
                "IsZoomedIn", typeof(bool), typeof(PeriodicTable),
                new PropertyMetadata(false, ZoomedInChanged)
            );
    
            public bool IsZoomedIn
            {
                get { return (bool)GetValue(IsZoomedInProperty); }
                set { SetValue(IsZoomedInProperty, value); }
            }
    
            public void SetZoom(bool isZoomedIn)
            {
                if (IsZoomedIn)
                {
                    FirstContainer.Visibility = Visibility.Collapsed;
                    SecondContainer.Visibility = Visibility.Visible;
                }
                else
                {
                    FirstContainer.Visibility = Visibility.Visible;
                    SecondContainer.Visibility = Visibility.Collapsed;
                }
            }
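
    The static ZoomedInChanged callback registered above is not shown in the article; a likely shape for it, based on the SetZoom method (an assumption, not the sample’s actual code), is:

            // Assumed property-changed callback: forwards the new value to the instance method.
            private static void ZoomedInChanged(DependencyObject d, DependencyPropertyChangedEventArgs e)
            {
                var table = (PeriodicTable)d;
                table.SetZoom((bool)e.NewValue);
            }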
    

    This dependency property is used in the TwoPlayerView so that we can bind the Second Player’s zoomed in state to a Boolean in the ViewModel called PlayerTwoZoomedIn:

    <local:PeriodicTable x:Name="playerTwoTable" IsZoomedIn="{Binding PlayerTwoZoomedIn, Mode=TwoWay}"></local:PeriodicTable>
    

    This implementation gives us the flexibility to tie custom features of the control to anything in the ViewModel. In our application, we need to set PlayerTwoZoomedIn (and PlayerOneZoomedIn) to false when a round or the game is reset.

    XAML is also heavily used to store the data in this application. While a database or a text file could be created, it seemed to be much more readable to store the Periodic table’s data as XAML. Since XAML is just an XML representation of CLR objects, we could create model classes and corresponding XAML elements. We can then store this in a XAML resource dictionary and load it as data at runtime (or design time if you wish).

    For example, we have a class for PeriodicItems that has a very simple definition and is represented by even simpler XAML:

        public class PeriodicItem
        {
            public string Title { get; set; }
    
            public string Abbreviation { get; set; }
    
            public int Number { get; set; }
        } 
    
    <local:PeriodicItem Abbreviation="Sc" Title="Scandium" Number="21" />
    <local:PeriodicItem Abbreviation="Ti" Title="Titanium" Number="22" />
    

    This made defining the Periodic table easy and readable. You can find all of the Periodic elements used in the application in the PeriodicTableDataSource.xaml file located in the Data folder. Here is an example of a Periodic Group defined in that file.

    <local:PeriodicGroup Key="Outer Transition Elements">
                    <local:PeriodicGroup.Items>
                        <local:PeriodicItem Abbreviation="Ni" Title="Nickel" Number="28" />
                        <local:PeriodicItem Abbreviation="Cu" Title="Copper" Number="29" />
                        <local:PeriodicItem Abbreviation="Zn" Title="Zinc" Number="30" />
                        <local:PeriodicItem Abbreviation="Y" Title="Yttrium" Number="39" />
                    </local:PeriodicGroup.Items>
                </local:PeriodicGroup>
    

    Because of this, the Periodic data is dynamic and can be modified by simply updating the .xaml file. You can also use the same data in both design and runtime Views since it’s compiled and available as a resource for XAML.

    Summary

    Building an application that supports a large amount of data and advanced touch scenarios is definitely possible in the Windows 8 desktop environment. XAML is a powerful markup language that allows you to not only define dynamic views, but also model your data in a common format that is very easy to read, understand, and parse. You can build touch applications today using the mature WPF platform.

    Intel and the Intel logo are trademarks of Intel Corporation in the U.S. and/or other countries.

    Copyright © 2013 Intel Corporation. All rights reserved.

    *Other names and brands may be claimed as the property of others.

  • Developing Sensor Applications for Intel® Atom™ Processor-Based Android* Phones and Tablets


    Developing Sensor Applications for Intel® Atom™ Processor-Based Android* Phones and Tablets


    This guide gives application developers an introduction to the Android sensor framework and discusses how to use some of the sensors that are generally available on phones and tablets based on the Intel® Atom™ processor. Among those covered are the motion, position, and environment sensors. The guide also discusses GPS-based location services, even though the Android framework does not strictly classify GPS as a sensor. The content of this guide is based on Android 4.2, Jelly Bean.

    Sensors on Intel® Atom™ Processor-Based Android* Phones and Tablets


    Android phones and tablets based on Intel Atom processors can support a wide range of hardware sensors. These sensors are used to detect motion and position changes and to report the ambient environmental parameters. The block diagram in Figure 1 shows a possible sensor configuration on a typical Intel Atom processor-based Android device.


    Figure 1.  Sensors on an Intel® Atom™ processor-based Android system

    Based on the data they report, Android sensors can be classified into the classes and types shown in Table 1.

    Motion sensors
      Accelerometer (TYPE_ACCELEROMETER) — measures a device's accelerations in m/s²; used for motion detection.
      Gyroscope (TYPE_GYROSCOPE) — measures a device's rates of rotation; used for rotation detection.

    Position sensors
      Magnetometer (TYPE_MAGNETIC_FIELD) — measures the strength of the Earth's geomagnetic field in µT; used as a compass.
      Proximity (TYPE_PROXIMITY) — measures the proximity of an object in cm; used to detect nearby objects.
      GPS (not an android.hardware sensor type) — obtains the device's accurate geographic location; used for accurate geolocation.

    Environment sensors
      ALS (TYPE_LIGHT) — measures the ambient light level in lx; used for automatic screen brightness control.
      Barometer — measures the ambient atmospheric pressure in mbar; used for altitude detection.

    Table 1.  Sensor types supported by the Android platform
     

    Android Sensor Framework


    The Android sensor framework provides mechanisms to access the sensors and sensor data, with the exception of the GPS, which is accessed through the Android location services, discussed later in this article. The sensor framework is part of the android.hardware package. Table 2 lists its main classes and interfaces.

    SensorManager (class) — used to create an instance of the sensor service; provides various methods for accessing sensors, registering and unregistering sensor event listeners, and so on.
    Sensor (class) — used to create an instance of a specific sensor.
    SensorEvent (class) — used by the system to publish sensor data; includes the raw sensor data values, the sensor type, the data accuracy, and a timestamp.
    SensorEventListener (interface) — provides callback methods to receive notifications from the SensorManager when the sensor data or the sensor accuracy changes.

    Table 2. The sensor framework of the Android platform

    Obtaining the Sensor Configuration

    Device manufacturers decide which sensors are available on the device. You must discover which sensors are available at runtime by calling the sensor framework's SensorManager getSensorList() method with the parameter “Sensor.TYPE_ALL”. Code Example 1 displays the list of available sensors and the vendor, power, and accuracy information of each sensor.

    package com.intel.deviceinfo;
    	
    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;
    
    import android.app.Fragment;
    import android.content.Context;
    import android.hardware.Sensor;
    import android.hardware.SensorManager;
    import android.os.Bundle;
    import android.view.LayoutInflater;
    import android.view.View;
    import android.view.ViewGroup;
    import android.widget.AdapterView;
    import android.widget.AdapterView.OnItemClickListener;
    import android.widget.ListView;
    import android.widget.SimpleAdapter;
    	
    public class SensorInfoFragment extends Fragment {
    	
        private View mContentView;
    	
        private ListView mSensorInfoList;	
        SimpleAdapter mSensorInfoListAdapter;
    	
        private List<Sensor> mSensorList;
    
        private SensorManager mSensorManager;
    	
        @Override
        public void onActivityCreated(Bundle savedInstanceState) {
            super.onActivityCreated(savedInstanceState);
        }
    	
        @Override
        public void onPause() 
        { 
            super.onPause();
        }
    	
        @Override
        public void onResume() 
        {
            super.onResume();
        }
    	
        @Override
        public View onCreateView(LayoutInflater inflater, ViewGroup container,
                Bundle savedInstanceState) {
            mContentView = inflater.inflate(R.layout.content_sensorinfo_main, null);
            mContentView.setDrawingCacheEnabled(false);
    	
            mSensorManager = (SensorManager)getActivity().getSystemService(Context.SENSOR_SERVICE);
    	
            mSensorInfoList = (ListView)mContentView.findViewById(R.id.listSensorInfo);
    		
            mSensorInfoList.setOnItemClickListener( new OnItemClickListener() {
    			
                @Override
                public void onItemClick(AdapterView<?> arg0, View view, int index, long arg3) {
    				
                    // with the index, figure out what sensor was pressed
                    Sensor sensor = mSensorList.get(index);
    				
                    // pass the sensor to the dialog.
                    SensorDialog dialog = new SensorDialog(getActivity(), sensor);
    
                    dialog.setContentView(R.layout.sensor_display);
                    dialog.setTitle("Sensor Data");
                    dialog.show();
                }
            });
    		
            return mContentView;
        }
    	
        void updateContent(int category, int position) {
            mSensorInfoListAdapter = new SimpleAdapter(getActivity(), 
    	    getData() , android.R.layout.simple_list_item_2,
    	    new String[] {
    	        "NAME",
    	        "VALUE"
    	    },
    	    new int[] { android.R.id.text1, android.R.id.text2 });
    	mSensorInfoList.setAdapter(mSensorInfoListAdapter);
        }
    	
    	
        protected void addItem(List<Map<String, String>> data, String name, String value)   {
            Map<String, String> temp = new HashMap<String, String>();
            temp.put("NAME", name);
            temp.put("VALUE", value);
            data.add(temp);
        }
    	
    	
        private List<? extends Map<String, ?>> getData() {
            List<Map<String, String>> myData = new ArrayList<Map<String, String>>();
            mSensorList = mSensorManager.getSensorList(Sensor.TYPE_ALL);
    		
            for (Sensor sensor : mSensorList ) {
                addItem(myData, sensor.getName(),  "Vendor: " + sensor.getVendor() + ", min. delay: " + sensor.getMinDelay() +", power while in use: " + sensor.getPower() + "mA, maximum range: " + sensor.getMaximumRange() + ", resolution: " + sensor.getResolution());
            }
            return myData;
        }
    }

    Code Example 1. A fragment that displays the list of sensors**

    Sensor Coordinate System

    The sensor framework reports sensor data using a standard 3-axis coordinate system, where the X, Y, and Z values are represented by values[0], values[1], and values[2] of the SensorEvent object, respectively.

    Some sensors, such as the light, temperature, proximity, and pressure sensors, return only single values. For these sensors, only values[0] of the SensorEvent object is used.

    Other sensors report data in the standard 3-axis sensor coordinate system. The sensors of this type include:

    • Accelerometer
    • Gravity sensor
    • Gyroscope
    • Geomagnetic field sensor

    The 3-axis sensor coordinate system is defined relative to the device's screen in its natural (default) orientation. For a phone, the default orientation is portrait, while for a tablet the natural orientation is landscape. When the device is held in its natural orientation, the x axis is horizontal and points to the right, the y axis is vertical and points up, and the z axis points outward from the front face of the screen. Figure 2 shows the sensor coordinate system for a phone, and Figure 3 shows it for a tablet.


    Figure 2. The sensor coordinate system for a phone


    Figure 3.  The sensor coordinate system for a tablet

    The most important point about the sensor coordinate system is that it never changes when the device moves or changes its orientation.

    Monitoring Sensor Events

    The sensor framework reports sensor data with SensorEvent objects. A class can monitor a specific sensor's data by implementing the SensorEventListener interface and registering with the SensorManager for that sensor. The sensor framework informs the class about changes in the sensor states through the following two SensorEventListener callback methods implemented by the class:

     

    onAccuracyChanged()

    and

    onSensorChanged()

    Code Example 2 implements the SensorDialog used in the SensorInfoFragment example that we discussed in the section "Obtaining the Sensor Configuration".

    package com.intel.deviceinfo;
    
    import android.app.Dialog;
    import android.content.Context;
    import android.hardware.Sensor;
    import android.hardware.SensorEvent;
    import android.hardware.SensorEventListener;
    import android.hardware.SensorManager;
    import android.os.Bundle;
    import android.widget.TextView;
    
    public class SensorDialog extends Dialog implements SensorEventListener {
        Sensor mSensor;
        TextView mDataTxt;
        private SensorManager mSensorManager;
    
        public SensorDialog(Context ctx, Sensor sensor) {
            this(ctx);
            mSensor = sensor;
        }
    	
        @Override
        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            mDataTxt = (TextView) findViewById(R.id.sensorDataTxt);
            mDataTxt.setText("...");
            setTitle(mSensor.getName());
        }
    	
        @Override
        protected void onStart() {
            super.onStart();
            mSensorManager.registerListener(this, mSensor,  SensorManager.SENSOR_DELAY_FASTEST);
        }
    		
        @Override
        protected void onStop() {
            super.onStop();
            mSensorManager.unregisterListener(this, mSensor);
        }
    
        @Override
        public void onAccuracyChanged(Sensor sensor, int accuracy) {
        }
    
        @Override
        public void onSensorChanged(SensorEvent event) {
            if (event.sensor.getType() != mSensor.getType()) {
                return;
            }
            StringBuilder dataStrBuilder = new StringBuilder();
            if ((event.sensor.getType() == Sensor.TYPE_LIGHT)||
                (event.sensor.getType() == Sensor.TYPE_TEMPERATURE)||
                (event.sensor.getType() == Sensor.TYPE_PRESSURE)) {
            dataStrBuilder.append(String.format("Data: %.3f\n", event.values[0]));
            }
            else{         
                dataStrBuilder.append( 
                    String.format("Data: %.3f, %.3f, %.3fn", 
                    event.values[0], event.values[1], event.values[2] ));
            }
            mDataTxt.setText(dataStrBuilder.toString());
        }
    }

    Code Example 2. A dialog that displays the values of the sensors**

    Motion Sensors

    Motion sensors are used to monitor device movement, such as shaking, rotating, swinging, or tilting. The accelerometer and the gyroscope are two motion sensors available on many tablets and phones.

    Motion sensors report data using the sensor coordinate system, where the three data values in the SensorEvent object, values[0], values[1], and values[2], represent the x-, y-, and z-axis values, respectively.

    To understand the motion sensors and use their data in an application, we need to apply some physics formulas related to force, mass, acceleration, Newton's laws of motion, and the relationship over time between some of these quantities. To learn more about these formulas and relationships, refer to physics textbooks or public domain sources.

    Accelerometer

    The accelerometer measures the acceleration applied to the device; its properties are summarized in Table 3.

     
    Sensor: Accelerometer
    Type: TYPE_ACCELEROMETER
    SensorEvent data (m/s²):
      values[0] — acceleration along the x axis
      values[1] — acceleration along the y axis
      values[2] — acceleration along the z axis

    Table 3. The accelerometer

    The concept of the accelerometer derives from Newton's second law of motion:

    a = F/m

    The acceleration of an object is the result of the net external force applied to the object. External forces include one that applies to all objects on Earth: gravity. The acceleration is proportional to the net force F applied to the object and inversely proportional to the object's mass m.

    In our code, instead of directly using the above equation, we are more interested in the effect the acceleration has on the device's velocity and position over a period of time. The following equation describes the relationship between an object's velocity v1, its original velocity v0, the acceleration a, and the time t:

    v1 = v0 + at

    To calculate the displacement s of the object's position, we use the following equation:

    s = v0t + (1/2)at²

    In many cases we start with the condition that v0 equals 0 (before the device starts moving), which simplifies the equation to:

    s = at²/2

    Gravity, represented by the symbol g, subjects all objects on Earth to gravitational acceleration. Regardless of an object's mass, the value of g depends only on the latitude of the object's location and ranges from 9.78 to 9.82 (m/s²). We adopt a conventional standard value for g:

    g = 9.80665 (m/s²)

    Since the accelerometer returns values using the device's multidimensional coordinate system, in our code we can calculate the distances along the x, y, and z axes using the following equations:

    Sx = AxT²/2
    Sy = AyT²/2
    Sz = AzT²/2

    where Sx, Sy, and Sz are the displacements along the x, y, and z axes, respectively, and Ax, Ay, and Az are the accelerations along the x, y, and z axes, respectively. T is the duration of the measurement period.

    public class SensorDialog extends Dialog implements SensorEventListener {
        …	
        private Sensor mSensor;
        private SensorManager mSensorManager;
    	
        public SensorDialog(Context context) {
            super(context);
            mSensorManager = (SensorManager)context.getSystemService(Context.SENSOR_SERVICE);
            mSensor = mSensorManager.getDefaultSensor(Sensor.TYPE_ACCELEROMETER);
        …
    }

    Code Example 3. Instantiation of an accelerometer**
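
    As an illustration of the displacement formulas above, the following sketch (not part of the original sample) applies s = at²/2 per axis inside onSensorChanged(), assuming the device is at rest at the start of each measurement interval. Note that TYPE_ACCELEROMETER readings include gravity; TYPE_LINEAR_ACCELERATION excludes it.

    // Sketch: estimate per-axis displacement over the interval between two sensor events.
    private long mLastTimestamp = 0;

    @Override
    public void onSensorChanged(SensorEvent event) {
        if (event.sensor.getType() != Sensor.TYPE_ACCELEROMETER) {
            return;
        }
        if (mLastTimestamp != 0) {
            // event.timestamp is in nanoseconds; convert the interval to seconds.
            float t = (event.timestamp - mLastTimestamp) * 1.0e-9f;
            float sx = event.values[0] * t * t / 2;
            float sy = event.values[1] * t * t / 2;
            float sz = event.values[2] * t * t / 2;
            // use sx, sy, sz here ...
        }
        mLastTimestamp = event.timestamp;
    }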

    Sometimes we do not use the data values of all three dimensions. At other times we may also need to take the device's orientation into account. For example, in a maze application we use only the x-axis and y-axis gravitational acceleration to calculate the ball's movement directions and distances, based on the device's orientation. The logic is described in the following code fragment (Code Example 4).

    @Override
    public void onSensorChanged(SensorEvent event) {
        if (event.sensor.getType() != Sensor.TYPE_ACCELEROMETER) {
            return;
        } 
    float accelX, accelY;
    …
    //detect the current rotation currentRotation from its “natural orientation”
    //using the WindowManager
        switch (currentRotation) {
            case Surface.ROTATION_0:
                accelX = event.values[0];
                accelY = event.values[1];
                break;
            case Surface.ROTATION_90:
                accelX = -event.values[0];
                accelY = event.values[1];
                break;
            case Surface.ROTATION_180:
                accelX = -event.values[0];
                accelY = -event.values[1];
                break;
            case Surface.ROTATION_270:
                accelX = event.values[0];
                accelY = -event.values[1];
                break;
        }
        //calculate the ball’s moving distances along x, and y using accelX, accelY and the time delta
            …
        }
    }

    Code Example 4. Considering the device orientation when using accelerometer data in a maze game**

    Gyroscope


    The gyroscope measures the device's rate of rotation around the x, y, and z axes, as shown in Table 4. The gyroscope data values can be positive or negative. Looking at the origin from a position along the positive half of an axis, if the rotation around the axis is counterclockwise the value is positive; if the rotation is clockwise, the value is negative. We can also determine the direction of a gyroscope value using the "right-hand rule", illustrated in Figure 4.


    Figure 4.  Using the "right-hand rule" to determine the positive rotation direction

    Sensor: Gyroscope
    Type: TYPE_GYROSCOPE
    SensorEvent data (rad/s):
      values[0] — rate of rotation around the x axis
      values[1] — rate of rotation around the y axis
      values[2] — rate of rotation around the z axis

    Table 4. The gyroscope

    Code Example 5 shows how to instantiate a gyroscope.

    public class SensorDialog extends Dialog implements SensorEventListener {
        …	
        private Sensor mGyro;
        private SensorManager mSensorManager;
    	
        public SensorDialog(Context context) {
            super(context);
            mSensorManager = (SensorManager)context.getSystemService(Context.SENSOR_SERVICE);
            mGyro = mSensorManager.getDefaultSensor(Sensor.TYPE_GYROSCOPE);
        …
    }

    Code Example 5. Instantiation of a gyroscope**

    Position Sensors

    Many Android tablets support two position sensors: the magnetometer and the proximity sensor. The magnetometer measures the strength of the Earth's magnetic field along the x, y, and z axes, while the proximity sensor detects the distance of the device from another object.

    Magnetometer

    The most important use of the magnetometer (described in Table 5) in Android systems is to implement the compass.

    Sensor: Magnetometer
    Type: TYPE_MAGNETIC_FIELD
    SensorEvent data (µT):
      values[0] — strength of the Earth's magnetic field along the x axis
      values[1] — strength of the Earth's magnetic field along the y axis
      values[2] — strength of the Earth's magnetic field along the z axis

    Table 5. The magnetometer

    Code Example 6 shows how to instantiate a magnetometer.

    public class SensorDialog extends Dialog implements SensorEventListener {
        …	
        private Sensor mMagnetometer;
        private SensorManager mSensorManager;
    	
        public SensorDialog(Context context) {
            super(context);
            mSensorManager = (SensorManager)context.getSystemService(Context.SENSOR_SERVICE);
            mMagnetometer = mSensorManager.getDefaultSensor(Sensor.TYPE_MAGNETIC_FIELD);
        …
    }

    Code Example 6. Instantiation of a magnetometer**
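
    A compass heading is typically derived by combining the magnetometer readings with the accelerometer readings. The following is a minimal sketch (not from the original sample) using SensorManager's rotation-matrix helpers, assuming both sensors are registered and their latest values are cached:

    // Sketch: compute the azimuth (compass heading) from cached accelerometer and magnetometer data.
    private final float[] mGravity = new float[3];
    private final float[] mGeomagnetic = new float[3];

    @Override
    public void onSensorChanged(SensorEvent event) {
        if (event.sensor.getType() == Sensor.TYPE_ACCELEROMETER) {
            System.arraycopy(event.values, 0, mGravity, 0, 3);
        } else if (event.sensor.getType() == Sensor.TYPE_MAGNETIC_FIELD) {
            System.arraycopy(event.values, 0, mGeomagnetic, 0, 3);
        }
        float[] rotation = new float[9];
        float[] inclination = new float[9];
        if (SensorManager.getRotationMatrix(rotation, inclination, mGravity, mGeomagnetic)) {
            float[] orientation = new float[3];
            SensorManager.getOrientation(rotation, orientation);
            // orientation[0] is the azimuth in radians (0 = magnetic north).
            float azimuthDegrees = (float) Math.toDegrees(orientation[0]);
            // use azimuthDegrees here ...
        }
    }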

    Proximity Sensor

    The proximity sensor provides the distance between the device and another object. The device can use it to detect whether it is being held close to the user (see Table 6) and thus determine whether the user is making a phone call, turning off the screen during the call.

    Table 6. The proximity sensor
    Sensor: Proximity
    Type: TYPE_PROXIMITY
    SensorEvent data: values[0] — distance from an object in cm. Some proximity sensors report only a boolean value indicating whether the object is close enough.

    Code Example 7 shows how to instantiate a proximity sensor.

    public class SensorDialog extends Dialog implements SensorEventListener {
        …	
        private Sensor mProximity;
        private SensorManager mSensorManager;
    	
        public SensorDialog(Context context) {
            super(context);
            mSensorManager = (SensorManager)context.getSystemService(Context.SENSOR_SERVICE);
            mProximity = mSensorManager.getDefaultSensor(Sensor.TYPE_PROXIMITY);
        …
    }

    Code Example 7. Instantiating a proximity sensor**

    Environment Sensors

    Environment sensors detect and report the device's environmental parameters, such as light, temperature, pressure, or humidity. The ambient light sensor (ALS) and the pressure sensor (barometer) are available on many Android tablets.

    Ambient Light Sensor (ALS)

    The ambient light sensor, described in Table 7, is used by the system to detect the illumination of the surrounding environment and automatically adjust the screen brightness accordingly.

    Table 7: The ambient light sensor
    Sensor   Type         SensorEvent data (lx)   Description
    ALS      TYPE_LIGHT   values[0]               The illumination around the device

    Code Example 8 shows how to instantiate the ALS.

    …	
        private Sensor mALS;
        private SensorManager mSensorManager;
    
        …	
            mSensorManager = (SensorManager)context.getSystemService(Context.SENSOR_SERVICE);
            mALS = mSensorManager.getDefaultSensor(Sensor.TYPE_LIGHT);
        …

    Code Example 8. Instantiating an ambient light sensor**

    Barometer

    Applications can use the atmospheric pressure sensor (barometer), described in Table 8, to calculate the altitude of the device's current location.

    Table 8: The atmospheric pressure sensor
    Sensor      Type            SensorEvent data (mbar)   Description
    Barometer   TYPE_PRESSURE   values[0]                 The ambient air pressure

    Code Example 9 shows how to instantiate the barometer.

    …	
        private Sensor mBarometer;
        private SensorManager mSensorManager;
    
        …	
            mSensorManager = (SensorManager)context.getSystemService(Context.SENSOR_SERVICE);
            mBarometer = mSensorManager.getDefaultSensor(Sensor.TYPE_PRESSURE);
        …

    Code Example 9. Instantiating the barometer**
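
    Once the barometer listener is registered, the pressure reading can be converted into an approximate altitude with SensorManager.getAltitude(). The fragment below is a minimal sketch of that conversion, written in the same elided style as the samples above; it is not part of the original sample code.

    @Override
    public void onSensorChanged(SensorEvent event) {
        if (event.sensor.getType() != Sensor.TYPE_PRESSURE) {
            return;
        }
        float pressureMbar = event.values[0];
        // Approximate altitude in meters above sea level, using the standard
        // sea-level pressure of 1013.25 mbar as the reference.
        float altitudeMeters = SensorManager.getAltitude(
                SensorManager.PRESSURE_STANDARD_ATMOSPHERE, pressureMbar);
        Log.d("Barometer", "Altitude: " + altitudeMeters + " m");
    }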

    Sensor Performance and Optimization Guidelines

    Follow these best practices when using sensors in your applications:

    • Always check the availability of a specific sensor before using it (see the sketch after this list)
      The Android platform does not require a device to include or exclude any specific sensor. Before using a sensor in your application, always check whether it is actually available.
    • Always unregister sensor listeners
      If the activity that implements the sensor listener becomes invisible, or the dialog stops, unregister the sensor listener. This can be done in the activity's onPause() method or the dialog's onStop() method. Otherwise, the sensor keeps acquiring data and drains the battery as a result.
    • Do not block the onSensorChanged() method
      The onSensorChanged() method is called frequently by the system to report sensor data. The logic in this method should be kept to a minimum. Complex computations on the sensor data should be moved outside of this method.
    • Always test sensor applications on real devices
      All the sensors described in this section are hardware sensors. The Android emulator may not be able to simulate the functions and performance of a particular sensor.
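
    The first two guidelines can be illustrated with a minimal sketch, assuming an activity that only needs the accelerometer: getDefaultSensor() returns null when the sensor is not present, and the listener is registered and unregistered with the activity lifecycle. This is not part of the original sample code.

    import android.app.Activity;
    import android.content.Context;
    import android.hardware.Sensor;
    import android.hardware.SensorEvent;
    import android.hardware.SensorEventListener;
    import android.hardware.SensorManager;
    import android.os.Bundle;

    // Sketch only: check sensor availability and tie listener registration
    // to the activity lifecycle so the sensor does not drain the battery.
    public class AccelActivity extends Activity implements SensorEventListener {
        private SensorManager mSensorManager;
        private Sensor mAccel;   // stays null if the device has no accelerometer

        @Override
        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            mSensorManager = (SensorManager) getSystemService(Context.SENSOR_SERVICE);
            mAccel = mSensorManager.getDefaultSensor(Sensor.TYPE_ACCELEROMETER);
        }

        @Override
        protected void onResume() {
            super.onResume();
            if (mAccel != null) {   // always check availability before use
                mSensorManager.registerListener(this, mAccel, SensorManager.SENSOR_DELAY_NORMAL);
            }
        }

        @Override
        protected void onPause() {
            super.onPause();
            mSensorManager.unregisterListener(this);   // stop acquiring data when invisible
        }

        @Override
        public void onSensorChanged(SensorEvent event) {
            // Keep this method lightweight.
        }

        @Override
        public void onAccuracyChanged(Sensor sensor, int accuracy) {
        }
    }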

    GPS and Location


    GPS (Global Positioning System) is a satellite-based system that provides accurate geolocation information around the world. GPS is available on many Android phones and tablets. In many respects GPS behaves like a position sensor: it can provide accurate location data to applications running on the device. On the Android platform, GPS is not handled directly by the sensor framework. Instead, the Android location service accesses the GPS data and passes it to an application through location listener callbacks.

    This section discusses GPS and location services only from a hardware sensor point of view. The complete location strategies offered by Android 4.2 and by Intel Atom processor-based Android phones and tablets are a much broader topic that is outside the scope of this section.

    Android Location Services

    Using GPS is not the only way to obtain location information on an Android device. The system can also use Wi-Fi*, cellular networks, or other wireless networks to obtain the device's current location. GPS and wireless networks (including Wi-Fi and cellular networks) act as "location providers" for the Android location services. Table 9 lists the main classes and interfaces used to access the Android location services.

    Table 9: The location service of the Android platform
    Name               Type             Description
    LocationManager    Class            Used to access the location services. Provides various methods for requesting periodic location updates for an application, or for sending proximity alerts
    LocationProvider   Abstract class   The abstract superclass for location providers
    Location           Class            Used by the location providers to encapsulate geographical data
    LocationListener   Interface        Used to receive location notifications from the LocationManager

    Getting GPS Location Updates

    To receive GPS location updates, the application implements several callback methods defined in the LocationListener interface, similar to the mechanism the sensor framework uses to access sensor data. The LocationManager sends GPS update notifications to the application through these callbacks (the "Don't call us, we'll call you" rule).

    To access GPS location data in the application, you must request the fine location access permission in your Android manifest file (Code Example 10).

    <manifest …>
    …
        <uses-permission android:name="android.permission.ACCESS_FINE_LOCATION" />
    …  
    </manifest>

    Code Example 10. Requesting the fine location access permission in the manifest file**

    Code Example 11 shows how to get GPS location updates and display the latitude and longitude coordinates as the text of a dialog.

    package com.intel.deviceinfo;
    
    import android.app.Dialog;
    import android.content.Context;
    import android.location.Location;
    import android.location.LocationListener;
    import android.location.LocationManager;
    import android.os.Bundle;
    import android.widget.TextView;
    
    public class GpsDialog extends Dialog implements LocationListener {
        TextView mDataTxt;
        private LocationManager mLocationManager;
    	
        public GpsDialog(Context context) {
            super(context);
            mLocationManager = (LocationManager)context.getSystemService(Context.LOCATION_SERVICE);
        }
    
        @Override
        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
    	       mDataTxt = (TextView) findViewById(R.id.sensorDataTxt);
              mDataTxt.setText("...");
    		
            setTitle("Gps Data");
        }
    	
        @Override
        protected void onStart() {
            super.onStart();
            mLocationManager.requestLocationUpdates(
                LocationManager.GPS_PROVIDER, 0, 0, this);
        }
    		
        @Override
        protected void onStop() {
            super.onStop();
            mLocationManager.removeUpdates(this);
        }
    
        @Override
        public void onStatusChanged(String provider, int status, 
            Bundle extras) {
        }
    
        @Override
        public void onProviderEnabled(String provider) {
        }
    
        @Override
        public void onProviderDisabled(String provider) {
        }
    
        @Override
        public void onLocationChanged(Location location) {
            StringBuilder dataStrBuilder = new StringBuilder();
            dataStrBuilder.append(String.format("Latitude: %.3f,   Logitude%.3fn", location.getLatitude(), location.getLongitude()));
            mDataTxt.setText(dataStrBuilder.toString());
    		
        }
    }

    Code Example 11. A dialog that displays GPS location data**

    GPS and Location Performance and Optimization Guidelines

    GPS provides the most accurate location information about the device. However, as a hardware feature, it consumes extra power, and it also takes time to get the first location fix. Here are some guidelines to follow when developing location-aware GPS applications:

    • Consider all the available location providers
      In addition to GPS_PROVIDER, there is NETWORK_PROVIDER. If your application only needs coarse location data, consider using NETWORK_PROVIDER (a rough sketch follows this list).
    • Use cached locations
      GPS takes time to get the first location fix. While your application is waiting for GPS to obtain an accurate location update, you can use the locations provided by the LocationManager's getLastKnownLocation() method to perform part of the work.
    • Minimize the frequency and duration of location update requests
      Request location updates only when needed, and promptly unregister the request from the location manager once location updates are no longer required.
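
    As a rough sketch of the first two guidelines (not part of the original sample), the fragment below seeds the UI with the last known fix and requests coarse network-based updates. It assumes a Context named context, an existing LocationListener named locationListener, a hypothetical showLocation() UI helper, and that the appropriate location permission has already been declared in the manifest.

    // Sketch only: use a cached fix while waiting for a fresh one, and prefer
    // the network provider when coarse accuracy is good enough.
    LocationManager lm = (LocationManager) context.getSystemService(Context.LOCATION_SERVICE);

    Location cached = lm.getLastKnownLocation(LocationManager.NETWORK_PROVIDER);
    if (cached != null) {
        showLocation(cached);   // hypothetical helper: show the cached position first
    }

    // Coarse, low-power updates: at most every 60 seconds or 100 meters.
    lm.requestLocationUpdates(LocationManager.NETWORK_PROVIDER, 60000, 100, locationListener);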

    Summary


    The Android platform provides developers with APIs to access the sensors built into devices. These sensors can provide raw data about the device's current motion, position, and ambient conditions with high precision and accuracy. When developing sensor applications, follow the best practices to improve performance and power efficiency.

    About the Author

    Miao Wei is a software engineer in the Software and Services Group at Intel. He is currently working on Intel® Atom™ processor scale-enabling projects.




    Copyright © 2013 Intel Corporation. All rights reserved.
    *Other names and brands may be claimed as the property of others.

    **This sample source code is released under the Intel Sample Source Code License Agreement


    Developing Sensor Applications for Intel® Atom™ Processor-Based Phones and Tablets Running Android*


    This guide is intended for application developers and provides an overview of the sensor framework supported by Android, as well as a discussion of how to use some of the sensors that are generally available on Intel® Atom™ processor-based phones and tablets. Motion, position, and environment sensors are discussed. Although GPS is not strictly considered a sensor in the Android framework, the GPS-based location service is also covered in this guide. Everything discussed in this guide applies to Android 4.2, Jelly Bean.

    Sensors on Intel® Atom™ Processor-Based Phones and Tablets


    Intel Atom processor-based phones and tablets running Android can support a wide range of hardware sensors. They are used to detect motion and position changes and to collect data that characterizes the ambient environment. Figure 1 shows a possible sensor configuration on a typical Intel Atom processor-based Android device.


    Figure 1.  Sensors on an Intel® Atom™ processor-based Android system

    Based on the data they report, Android sensors can be divided into the classes and types shown in Table 1.

    Motion sensors        Accelerometer (TYPE_ACCELEROMETER)           Measures the device's acceleration in m/s²                     Motion detection
                          Gyroscope (TYPE_GYROSCOPE)                   Measures the device's rotation rates                            Rotation detection
    Position sensors      Magnetometer (TYPE_MAGNETIC_FIELD)           Measures the strength of the Earth's geomagnetic field in µT    Compass
                          Proximity (TYPE_PROXIMITY)                   Measures the proximity of an object in cm                       Nearby object detection
                          GPS (not an android.hardware.Sensor type)    Gets accurate geolocation data for the device                   Accurate geolocation detection
    Environment sensors   ALS (TYPE_LIGHT)                             Measures the ambient light level in lx                          Automatic screen brightness control
                          Barometer                                    Measures the ambient air pressure in mbar                       Altitude detection

    Table 1.  Sensor types supported by the Android platform

    Android Sensor Framework


    The Android sensor framework provides mechanisms to access the sensors and their data, with the exception of GPS, which is accessed through the Android location services. We will discuss these later in this article. The sensor framework is part of the android.hardware package. Table 2 lists the classes and interfaces of the sensor framework.

    Name                  Type        Description
    SensorManager         Class       Used to create an instance of the sensor service. Provides various methods for accessing sensors, registering and unregistering sensor event listeners, and so on.
    Sensor                Class       Used to create an instance of a specific sensor.
    SensorEvent           Class       Used by the system to publish sensor data. It includes the raw sensor data values, the sensor type, the data accuracy, and a timestamp.
    SensorEventListener   Interface   Provides callback methods to receive notifications from the SensorManager when the sensor data or the sensor accuracy has changed.

    Table 2. The sensor framework of the Android platform

    Obtaining Sensor Configuration

    Device manufacturers individually decide which sensors are available on the device. You must discover which sensors are available at runtime by calling the sensor framework's SensorManager getSensorList() method with the parameter "Sensor.TYPE_ALL". Code Example 1 displays the list of available sensors and the vendor, power, and accuracy information for each sensor.

    package com.intel.deviceinfo;
    	
    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;
    
    import android.app.Fragment;
    import android.content.Context;
    import android.hardware.Sensor;
    import android.hardware.SensorManager;
    import android.os.Bundle;
    import android.view.LayoutInflater;
    import android.view.View;
    import android.view.ViewGroup;
    import android.widget.AdapterView;
    import android.widget.AdapterView.OnItemClickListener;
    import android.widget.ListView;
    import android.widget.SimpleAdapter;
    	
    public class SensorInfoFragment extends Fragment {
    	
        private View mContentView;
    	
        private ListView mSensorInfoList;	
        SimpleAdapter mSensorInfoListAdapter;
    	
        private List<Sensor> mSensorList;
    
        private SensorManager mSensorManager;
    	
        @Override
        public void onActivityCreated(Bundle savedInstanceState) {
            super.onActivityCreated(savedInstanceState);
        }
    	
        @Override
        public void onPause() 
        { 
            super.onPause();
        }
    	
        @Override
        public void onResume() 
        {
            super.onResume();
        }
    	
        @Override
        public View onCreateView(LayoutInflater inflater, ViewGroup container,
                Bundle savedInstanceState) {
            mContentView = inflater.inflate(R.layout.content_sensorinfo_main, null);
            mContentView.setDrawingCacheEnabled(false);
    	
            mSensorManager = (SensorManager)getActivity().getSystemService(Context.SENSOR_SERVICE);
    	
            mSensorInfoList = (ListView)mContentView.findViewById(R.id.listSensorInfo);
    		
            mSensorInfoList.setOnItemClickListener( new OnItemClickListener() {
    			
                @Override
                public void onItemClick(AdapterView<?> arg0, View view, int index, long arg3) {
    				
                    // with the index, figure out what sensor was pressed
                    Sensor sensor = mSensorList.get(index);
    				
                    // pass the sensor to the dialog.
                    SensorDialog dialog = new SensorDialog(getActivity(), sensor);
    
                    dialog.setContentView(R.layout.sensor_display);
                    dialog.setTitle("Sensor Data");
                    dialog.show();
                }
            });
    		
            return mContentView;
        }
    	
        void updateContent(int category, int position) {
            mSensorInfoListAdapter = new SimpleAdapter(getActivity(), 
    	    getData() , android.R.layout.simple_list_item_2,
    	    new String[] {
    	        "NAME",
    	        "VALUE"
    	    },
    	    new int[] { android.R.id.text1, android.R.id.text2 });
    	mSensorInfoList.setAdapter(mSensorInfoListAdapter);
        }
    	
    	
        protected void addItem(List<Map<String, String>> data, String name, String value)   {
            Map<String, String> temp = new HashMap<String, String>();
            temp.put("NAME", name);
            temp.put("VALUE", value);
            data.add(temp);
        }
    	
    	
        private List<? extends Map<String, ?>> getData() {
            List<Map<String, String>> myData = new ArrayList<Map<String, String>>();
            mSensorList = mSensorManager.getSensorList(Sensor.TYPE_ALL);
    		
            for (Sensor sensor : mSensorList ) {
                addItem(myData, sensor.getName(),  "Vendor: " + sensor.getVendor() + ", min. delay: " + sensor.getMinDelay() +", power while in use: " + sensor.getPower() + "mA, maximum range: " + sensor.getMaximumRange() + ", resolution: " + sensor.getResolution());
            }
            return myData;
        }
    }

    Code Example 1. A fragment that displays the list of sensors**

    Sensor Coordinate System

    The sensor framework reports sensor data using a standard 3-axis coordinate system, where X, Y, and Z are represented by values[0], values[1], and values[2] in the SensorEvent object, respectively.

    Some sensors, such as the light, temperature, proximity, and pressure sensors, return only single values. For these sensors only values[0] of the SensorEvent object is used.

    Other sensors report data in the standard 3-axis sensor coordinate system. The following is a list of such sensors:

    • Accelerometer
    • Gravity sensor
    • Gyroscope
    • Geomagnetic field sensor

    The 3-axis sensor coordinate system is defined relative to the device's screen in its natural (default) orientation. For a phone, the default orientation is portrait; for a tablet, it is landscape. When the device is held in its natural orientation, the x axis is horizontal and points to the right, the y axis is vertical and points up, and the z axis points out of the screen (toward the viewer). Figure 2 shows the sensor coordinate system for a phone, and Figure 3 shows it for a tablet.


    Figure 2. The sensor coordinate system for a phone


    Figure 3.  The sensor coordinate system for a tablet

    The most important point about the sensor coordinate system is that it never changes when the device moves or its orientation changes.

    Monitoring Sensor Events

    The sensor framework reports sensor data with SensorEvent objects. A class can monitor the data of a specific sensor by implementing the SensorEventListener interface and registering with the SensorManager for that sensor. The sensor framework informs the class about changes in the sensor values through the following two SensorEventListener callback methods implemented by the class: onAccuracyChanged() and onSensorChanged().

    Code Example 2 shows the SensorDialog used in the SensorInfoFragment example discussed in the "Obtaining Sensor Configuration" section.

    package com.intel.deviceinfo;
    
    import android.app.Dialog;
    import android.content.Context;
    import android.hardware.Sensor;
    import android.hardware.SensorEvent;
    import android.hardware.SensorEventListener;
    import android.hardware.SensorManager;
    import android.os.Bundle;
    import android.widget.TextView;
    
    public class SensorDialog extends Dialog implements SensorEventListener {
        Sensor mSensor;
        TextView mDataTxt;
        private SensorManager mSensorManager;
    
        public SensorDialog(Context ctx, Sensor sensor) {
            super(ctx);
            // Initialize the sensor manager here; the listing has no one-argument constructor to chain to.
            mSensorManager = (SensorManager)ctx.getSystemService(Context.SENSOR_SERVICE);
            mSensor = sensor;
        }
    	
        @Override
        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            mDataTxt = (TextView) findViewById(R.id.sensorDataTxt);
            mDataTxt.setText("...");
            setTitle(mSensor.getName());
        }
    	
        @Override
        protected void onStart() {
            super.onStart();
            mSensorManager.registerListener(this, mSensor,  SensorManager.SENSOR_DELAY_FASTEST);
        }
    		
        @Override
        protected void onStop() {
            super.onStop();
            mSensorManager.unregisterListener(this, mSensor);
        }
    
        @Override
        public void onAccuracyChanged(Sensor sensor, int accuracy) {
        }
    
        @Override
        public void onSensorChanged(SensorEvent event) {
            if (event.sensor.getType() != mSensor.getType()) {
                return;
            }
            StringBuilder dataStrBuilder = new StringBuilder();
            if ((event.sensor.getType() == Sensor.TYPE_LIGHT)||
                (event.sensor.getType() == Sensor.TYPE_TEMPERATURE)||
                (event.sensor.getType() == Sensor.TYPE_PRESSURE)) {
                dataStrBuilder.append(String.format("Data: %.3fn", event.values[0]));
            }
            else{         
                dataStrBuilder.append( 
                    String.format("Data: %.3f, %.3f, %.3fn", 
                    event.values[0], event.values[1], event.values[2] ));
            }
            mDataTxt.setText(dataStrBuilder.toString());
        }
    }

    Code Example 2. A dialog that displays the sensor values**

    Motion Sensors

    Motion sensors are used to monitor device movements such as shaking, rotating, swinging, or tilting. The accelerometer and the gyroscope are two motion sensors that are available on many tablets and phones.

    Motion sensors report data using the sensor coordinate system, where the three values in the SensorEvent object, values[0], values[1], and values[2], represent the data for the x, y, and z axes, respectively.

    To understand the motion sensors and use their data in an application, we need to apply some physics formulas related to force, mass, and acceleration based on Newton's laws of motion, and the relationships between some of these quantities over time. To learn more about these formulas, refer to your favorite physics textbooks or public-domain sources.

    Accelerometer

    The accelerometer measures the acceleration applied to the device. Its properties are summarized in Table 3.

    Sensor          Type                 SensorEvent data (m/s²)   Description
    Accelerometer   TYPE_ACCELEROMETER   values[0]                 Acceleration along the x axis
                                         values[1]                 Acceleration along the y axis
                                         values[2]                 Acceleration along the z axis

    Table 3. The accelerometer

    The concept behind the accelerometer is based on Newton's second law of motion:

    a = F/m

    The acceleration of an object is the result of the net external force applied to it. The external forces include gravity, which applies to all objects on Earth. The acceleration is proportional to the net force F applied to the object and inversely proportional to the object's mass m.

    In our code, instead of directly using the equation above, we are interested in the effect of the acceleration over a period of time on the device's velocity and position. The following equation describes the relationship between an object's velocity v1, its original velocity v0, the acceleration a, and the time t:

    v1 = v0 + at

    To calculate the position displacement s of the object, we use the following equation:

    s = v0t + (1/2)at²

    In many cases we start with the condition v0 equal to 0 (before the device starts moving), which simplifies the equation to:

    s = at²/2

    Because of gravity, the gravitational acceleration, represented by the symbol g, applies to all objects on Earth. Independent of the object's mass, g depends only on the latitude of the object's location, with a value in the range of 9.78 to 9.82 (m/s²). We adopt the conventional standard value for g:

    g = 9.80665 (m/s²)

    Because the accelerometer reports values using the device's multidimensional coordinate system, in our code we can calculate the distances along the x, y, and z axes using the following equations:

    Sx = AxT²/2
    Sy = AyT²/2
    Sz = AzT²/2

    Where Sx, Sy, and Sz are the displacements along the x, y, and z axes, respectively, and Ax, Ay, and Az are the accelerations along the x, y, and z axes, respectively. T is the time of the measurement period.

    public class SensorDialog extends Dialog implements SensorEventListener {
        …	
        private Sensor mSensor;
        private SensorManager mSensorManager;
    	
        public SensorDialog(Context context) {
            super(context);
            mSensorManager = (SensorManager)context.getSystemService(Context.SENSOR_SERVICE);
            mSensor = mSensorManager.getDefaultSensor(Sensor.TYPE_ACCELEROMETER);
        …
    }

    Code Example 3. Instantiating the accelerometer**
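
    The displacement equations above can be applied directly in onSensorChanged(). The fragment below is a simplified sketch (not in the original article) that estimates the displacement along each axis during one sampling interval, assuming the device starts each interval at rest; event.timestamp is expressed in nanoseconds.

    private long mLastTimestamp = 0;   // timestamp of the previous sample, in nanoseconds

    @Override
    public void onSensorChanged(SensorEvent event) {
        if (event.sensor.getType() != Sensor.TYPE_ACCELEROMETER) {
            return;
        }
        if (mLastTimestamp != 0) {
            // T: elapsed time in seconds since the previous sample.
            float t = (event.timestamp - mLastTimestamp) / 1000000000.0f;
            // S = A * T^2 / 2 for each axis, per the equations above.
            float sx = event.values[0] * t * t / 2;
            float sy = event.values[1] * t * t / 2;
            float sz = event.values[2] * t * t / 2;
            // A real application would accumulate or filter these values here.
        }
        mLastTimestamp = event.timestamp;
    }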

    Sometimes we don't use all three dimensions of data. At other times we may also need to take the device's orientation into account. For example, for a maze application we use only the x-axis and y-axis gravity data to calculate the ball's moving directions and distances based on the device's orientation. The following code fragment (Code Example 4) shows the logic.

    @Override
    public void onSensorChanged(SensorEvent event) {
        if (event.sensor.getType() != Sensor.TYPE_ACCELEROMETER) {
            return;
        } 
    float accelX, accelY;
    …
    //detect the current rotation currentRotation from its “natural orientation”
    //using the WindowManager
        switch (currentRotation) {
            case Surface.ROTATION_0:
                accelX = event.values[0];
                accelY = event.values[1];
                break;
            case Surface.ROTATION_90:
                accelX = -event.values[0];
                accelY = event.values[1];
                break;
            case Surface.ROTATION_180:
                accelX = -event.values[0];
                accelY = -event.values[1];
                break;
            case Surface.ROTATION_270:
                accelX = event.values[0];
                accelY = -event.values[1];
                break;
        }
        //calculate the ball’s moving distances along x, and y using accelX, accelY and the time delta
            …
        }
    }

    Code Example 4. Considering the device orientation when using accelerometer data in a maze game**

    Gyroscope


    The gyroscope measures the rate of rotation of the device around the x, y, and z axes, as shown in Table 4. The gyroscope data values can be positive or negative. By convention, looking at the origin from a position along the positive half of an axis, counterclockwise rotation around the axis is positive and clockwise rotation is negative. We can also determine the direction of a gyroscope value using the "right-hand rule," shown in Figure 4.


    Figure 4.  Using the "right-hand rule" to determine the positive rotation direction

    Sensor      Type             SensorEvent data (rad/s)   Description
    Gyroscope   TYPE_GYROSCOPE   values[0]                  Rotation rate around the x axis
                                 values[1]                  Rotation rate around the y axis
                                 values[2]                  Rotation rate around the z axis

    Table 4. The gyroscope

    Code Example 5 shows how to instantiate the gyroscope.

    public class SensorDialog extends Dialog implements SensorEventListener {
        …	
        private Sensor mGyro;
        private SensorManager mSensorManager;
    	
        public SensorDialog(Context context) {
            super(context);
            mSensorManager = (SensorManager)context.getSystemService(Context.SENSOR_SERVICE);
            mGyro = mSensorManager.getDefaultSensor(Sensor.TYPE_GYROSCOPE);
        …
    }

    Code Example 5. Instantiating the gyroscope**

    Position Sensors

    Many Android tablets have two position sensors: the magnetometer and the proximity sensor. The magnetometer measures the strength of the Earth's magnetic field along the x, y, and z axes, while the proximity sensor detects the distance of the device from another object.

    Magnetometer

    The most important use of the magnetometer (described in Table 5) in Android systems is to implement the compass.

    Sensor         Type                  SensorEvent data (µT)   Description
    Magnetometer   TYPE_MAGNETIC_FIELD   values[0]               Earth's magnetic field strength along the x axis
                                         values[1]               Earth's magnetic field strength along the y axis
                                         values[2]               Earth's magnetic field strength along the z axis

    Table 5. The magnetometer

    Code Example 6 shows how to instantiate the magnetometer.

    public class SensorDialog extends Dialog implements SensorEventListener {
        …	
        private Sensor mMagnetometer;
        private SensorManager mSensorManager;
    	
        public SensorDialog(Context context) {
            super(context);
            mSensorManager = (SensorManager)context.getSystemService(Context.SENSOR_SERVICE);
            mMagnetometer = mSensorManager.getDefaultSensor(Sensor.TYPE_MAGNETIC_FIELD);
        …
    }

    Code Example 6. Instantiating the magnetometer**

    Proximity

    The proximity sensor measures the distance between the device and another object. The device can use it to detect whether it is being held close to the user (see Table 6), thereby determining whether the user is on a phone call, and turn off the screen for the duration of the call.

    Table 6: The proximity sensor
    Sensor      Type             SensorEvent data   Description
    Proximity   TYPE_PROXIMITY   values[0]          Distance from an object in cm. Some proximity sensors can only report a boolean value indicating whether the object is close enough.

    Code Example 7 shows how to instantiate the proximity sensor.

    public class SensorDialog extends Dialog implements SensorEventListener {
        …	
        private Sensor mProximity;
        private SensorManager mSensorManager;
    	
        public SensorDialog(Context context) {
            super(context);
            mSensorManager = (SensorManager)context.getSystemService(Context.SENSOR_SERVICE);
            mProximity = mSensorManager.getDefaultSensor(Sensor.TYPE_PROXIMITY);
        …
    }

    Code Example 7. Instantiating the proximity sensor**

    Environment Sensors

    Environment sensors are used to detect and report the parameters of the device's surrounding environment, such as light, temperature, pressure, or humidity. The ambient light sensor (ALS) and the pressure sensor (barometer) are available on many Android tablets.

    Ambient Light Sensor (ALS)

    The ambient light sensor, described in Table 7, is used by the system to detect the illumination of the surrounding environment and automatically adjust the screen brightness.

    Table 7: The ambient light sensor
    Sensor   Type         SensorEvent data (lx)   Description
    ALS      TYPE_LIGHT   values[0]               The illumination around the device

    Code Example 8 shows how to instantiate the ALS.

    …	
        private Sensor mALS;
        private SensorManager mSensorManager;
    
        …	
            mSensorManager = (SensorManager)context.getSystemService(Context.SENSOR_SERVICE);
            mALS = mSensorManager.getDefaultSensor(Sensor.TYPE_LIGHT);
        …

    Code Example 8. Instantiating the ambient light sensor**

    Barometer

    Applications can use the atmospheric pressure sensor (barometer), described in Table 8, to calculate the altitude of the device's current location.

    Table 8: The atmospheric pressure sensor
    Sensor      Type            SensorEvent data (mbar)   Description
    Barometer   TYPE_PRESSURE   values[0]                 The ambient air pressure

    Code Example 9 shows how to instantiate the barometer.

    …	
        private Sensor mBarometer;
        private SensorManager mSensorManager;
    
        …	
            mSensorManager = (SensorManager)context.getSystemService(Context.SENSOR_SERVICE);
            mBarometer = mSensorManager.getDefaultSensor(Sensor.TYPE_PRESSURE);
        …

    Code Example 9. Instantiating the barometer**

    Sensor Performance and Optimization Guidelines

    Follow these best practices when using sensors in your applications:

    • Always check the availability of a specific sensor before using it
      The Android platform does not require a device to include or exclude any specific sensor. Before using a sensor in your application, always check whether it is actually available.
    • Always unregister sensor listeners
      If the activity that implements the sensor listener becomes invisible, or the dialog is stopped, unregister the sensor listener. This can be done in the activity's onPause() method or the dialog's onStop() method. Otherwise, the sensor keeps acquiring data and drains the battery as a result.
    • Do not block the onSensorChanged() method (see the sketch after this list)
      The onSensorChanged() method is called frequently by the system to report sensor data. It should contain as little logic as possible. Complex computations on the sensor data should be moved outside of this method.
    • Always test sensor applications on real devices
      All the sensors described in this section are hardware sensors. The Android emulator may not be able to simulate a particular sensor's functions and performance.
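
    A minimal sketch of the third guideline is shown below (not part of the original sample): onSensorChanged() only copies the values and hands the heavy processing to a background HandlerThread. The thread name and the processValues() helper are illustrative.

    // Sketch only: keep onSensorChanged() light by posting heavy work
    // to a background HandlerThread (android.os.Handler / HandlerThread).
    private HandlerThread mWorkerThread = new HandlerThread("sensor-worker");
    private Handler mWorkerHandler;

    // Called once, e.g. from onCreate():
    //     mWorkerThread.start();
    //     mWorkerHandler = new Handler(mWorkerThread.getLooper());

    @Override
    public void onSensorChanged(SensorEvent event) {
        final float[] values = event.values.clone();   // copy; the event object is reused
        mWorkerHandler.post(new Runnable() {
            @Override
            public void run() {
                processValues(values);   // hypothetical expensive computation
            }
        });
    }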

    GPS and Location


    GPS (Global Positioning System) is a satellite-based system that provides accurate geolocation information around the world. GPS is available on many Android phones and tablets. In many respects GPS behaves like a position sensor: it can provide accurate location data for applications running on the device. On the Android platform, GPS is not handled directly by the sensor framework. Instead, the Android location service accesses the GPS data and passes it to an application through location listener callbacks.

    This section discusses GPS and location services only from a hardware sensor point of view. The complete location strategies offered by Android 4.2 on Intel Atom processor-based phones and tablets are a much broader topic that is outside the scope of this article.

    Android Location Services

    Using GPS is not the only way to obtain location information on an Android device. The system can also use Wi-Fi*, cellular networks, or other wireless networks to obtain the device's current location. GPS and wireless networks (including Wi-Fi and cellular networks) act as "location providers" for the Android location services. Table 9 lists the main classes and interfaces used to access the Android location services.

    Table 9. The location services of the Android platform
    Name               Type             Description
    LocationManager    Class            Used to access the location services. Provides various methods for requesting periodic location updates for an application, or for sending proximity alerts
    LocationProvider   Abstract class   The abstract superclass for location providers
    Location           Class            Used by the location providers to encapsulate geographical data
    LocationListener   Interface        Used to receive location notifications from the LocationManager

    Getting GPS Location Updates

    Similar to the mechanism the sensor framework uses to deliver sensor data, the application implements several callback methods defined in the LocationListener interface to receive GPS location updates. The LocationManager sends GPS update notifications to the application through these callbacks (the "Don't call us, we'll call you" rule).

    To access GPS location data in the application, you must request the fine location access permission in your Android manifest file (Code Example 10).

    <manifest …>
    …
        <uses-permission android:name="android.permission.ACCESS_FINE_LOCATION" />
    …  
    </manifest>

    Code Example 10. Requesting the fine location access permission in the manifest file**

    Code Example 11 shows how to get GPS updates and display the latitude and longitude coordinates as the text of a dialog.

    package com.intel.deviceinfo;
    
    import android.app.Dialog;
    import android.content.Context;
    import android.location.Location;
    import android.location.LocationListener;
    import android.location.LocationManager;
    import android.os.Bundle;
    import android.widget.TextView;
    
    public class GpsDialog extends Dialog implements LocationListener {
        TextView mDataTxt;
        private LocationManager mLocationManager;
    	
        public GpsDialog(Context context) {
            super(context);
            mLocationManager = (LocationManager)context.getSystemService(Context.LOCATION_SERVICE);
        }
    
        @Override
        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
    	       mDataTxt = (TextView) findViewById(R.id.sensorDataTxt);
              mDataTxt.setText("...");
    		
            setTitle("Gps Data");
        }
    	
        @Override
        protected void onStart() {
            super.onStart();
            mLocationManager.requestLocationUpdates(
                LocationManager.GPS_PROVIDER, 0, 0, this);
        }
    		
        @Override
        protected void onStop() {
            super.onStop();
            mLocationManager.removeUpdates(this);
        }
    
        @Override
        public void onStatusChanged(String provider, int status, 
            Bundle extras) {
        }
    
        @Override
        public void onProviderEnabled(String provider) {
        }
    
        @Override
        public void onProviderDisabled(String provider) {
        }
    
        @Override
        public void onLocationChanged(Location location) {
            StringBuilder dataStrBuilder = new StringBuilder();
            dataStrBuilder.append(String.format("Latitude: %.3f,   Logitude%.3fn", location.getLatitude(), location.getLongitude()));
            mDataTxt.setText(dataStrBuilder.toString());
    		
        }
    }

    Code Example 11. A dialog that displays the GPS location data**

    GPS and Location Performance and Optimization Guidelines

    GPS provides the most accurate location information about the device. On the other hand, as a hardware component, it consumes extra power, and it also takes time to get the first location fix. Here are some guidelines to follow when developing GPS and location-aware applications:

    • Consider all the available location providers
      In addition to GPS_PROVIDER there is NETWORK_PROVIDER. If your application only needs coarse location data, you may consider using only NETWORK_PROVIDER.
    • Use cached locations
      GPS takes time to get the first location fix. While your application is waiting for GPS to obtain an accurate location update, you can first use the locations provided by the LocationManager's getLastKnownLocation() method to perform part of the work.
    • Minimize the frequency and duration of location update requests (see the sketch after this list)
      Request location updates only when necessary, and promptly unregister from the location manager as soon as location updates are no longer needed.
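
    A small sketch of the last guideline (not from the original sample), assuming an activity that implements LocationListener and already holds a LocationManager in mLocationManager: the minTime/minDistance arguments throttle the requests, and removeUpdates() releases the GPS as soon as the updates are no longer needed.

    @Override
    protected void onResume() {
        super.onResume();
        // Ask for a fix at most every 30 seconds or 50 meters.
        mLocationManager.requestLocationUpdates(
                LocationManager.GPS_PROVIDER, 30000, 50, this);
    }

    @Override
    protected void onPause() {
        super.onPause();
        // Release the GPS promptly when updates are no longer needed.
        mLocationManager.removeUpdates(this);
    }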

    Summary


    The Android platform provides developers with APIs to access the built-in sensors of devices. These sensors are capable of providing raw data about the device's current motion, position, and ambient conditions with high precision and accuracy. When developing sensor applications, you should follow the recognized best practices to improve performance and power efficiency.

    About the Author

    Miao Wei is a software engineer in the Software and Services Group at Intel. He is currently working on Intel® Atom™ processor scale-enabling projects.




    Copyright © 2013 Intel Corporation. All rights reserved.
    *Other names and brands may be claimed as the property of others.

    **This sample source code is released under the Intel Sample Source Code License Agreement


    Developing Sensor Applications for Intel® Atom™ Processor-Based Android* Phones and Tablets


    This guide gives application developers an introduction to the Android sensor framework and describes how to use some of the sensors that are generally available on phones and tablets based on the Intel® Atom™ processor. Among others, we will describe the motion, position, and environment sensors. Although GPS is not strictly classified as a sensor in the Android framework, this guide also describes GPS-based location services. The discussions in this guide are based on Android 4.2, Jelly Bean.

    Sensors on Intel® Atom™ Processor-Based Android* Phones and Tablets


    Intel Atom processor-based Android phones and tablets can support a wide range of hardware sensors. These sensors are used to detect motion and position changes and to report the ambient environment parameters. The block diagram in Figure 1 shows a possible sensor configuration on a typical Intel Atom processor-based Android device.


    Figure 1. Sensors on an Intel® Atom™-based Android system

    We can classify the Android sensors into the classes and types shown in Table 1, based on the data they report.

    Motion sensors        Accelerometer (TYPE_ACCELEROMETER)           Measures the device's acceleration in m/s²                     Motion detection
                          Gyroscope (TYPE_GYROSCOPE)                   Measures the device's rotation rates                            Rotation detection
    Position sensors      Magnetometer (TYPE_MAGNETIC_FIELD)           Measures the strength of the Earth's geomagnetic field in µT    Compass
                          Proximity (TYPE_PROXIMITY)                   Measures the proximity of an object in cm                       Nearby object detection
                          GPS (not an android.hardware.Sensor type)    Gets accurate geolocation data for the device                   Accurate geolocation detection
    Environment sensors   ALS (TYPE_LIGHT)                             Measures the ambient light level in lx                          Automatic screen brightness control
                          Barometer                                    Measures the ambient air pressure in mbar                       Altitude detection

    Table 1. Sensor types supported by the Android platform

    Android Sensor Framework


    The Android sensor framework provides a mechanism to access the sensors and the sensor data, with the exception of GPS, which is accessed through the Android location services. We will describe these later in this article. The sensor framework is part of the android.hardware package. Table 2 lists the main classes and interfaces of the sensor framework.

    Name                  Type        Description
    SensorManager         Class       Used to create an instance of the sensor service. Provides various methods for accessing sensors, registering and unregistering sensor event listeners, and so on.
    Sensor                Class       Used to create an instance of a specific sensor.
    SensorEvent           Class       Used by the system to publish sensor data. It includes the raw sensor data values, the sensor type, the data accuracy, and a timestamp.
    SensorEventListener   Interface   Provides callback methods to receive notifications from the SensorManager when the sensor data or the sensor accuracy has changed.

    Table 2. The sensor framework of the Android platform

    Obtaining Sensor Configuration

    Device manufacturers decide which sensors are available on the device. You must discover which sensors are available at runtime by invoking the sensor framework's SensorManager getSensorList() method with the parameter "Sensor.TYPE_ALL". Code Example 1 displays the list of available sensors and the vendor, power, and accuracy information for each sensor.

    package com.intel.deviceinfo;
    	
    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;
    
    import android.app.Fragment;
    import android.content.Context;
    import android.hardware.Sensor;
    import android.hardware.SensorManager;
    import android.os.Bundle;
    import android.view.LayoutInflater;
    import android.view.View;
    import android.view.ViewGroup;
    import android.widget.AdapterView;
    import android.widget.AdapterView.OnItemClickListener;
    import android.widget.ListView;
    import android.widget.SimpleAdapter;
    	
    public class SensorInfoFragment extends Fragment {
    	
        private View mContentView;
    	
        private ListView mSensorInfoList;	
        SimpleAdapter mSensorInfoListAdapter;
    	
        private List<Sensor> mSensorList;
    
        private SensorManager mSensorManager;
    	
        @Override
        public void onActivityCreated(Bundle savedInstanceState) {
            super.onActivityCreated(savedInstanceState);
        }
    	
        @Override
        public void onPause() 
        { 
            super.onPause();
        }
    	
        @Override
        public void onResume() 
        {
            super.onResume();
        }
    	
        @Override
        public View onCreateView(LayoutInflater inflater, ViewGroup container,
                Bundle savedInstanceState) {
            mContentView = inflater.inflate(R.layout.content_sensorinfo_main, null);
            mContentView.setDrawingCacheEnabled(false);
    	
            mSensorManager = (SensorManager)getActivity().getSystemService(Context.SENSOR_SERVICE);
    	
            mSensorInfoList = (ListView)mContentView.findViewById(R.id.listSensorInfo);
    		
            mSensorInfoList.setOnItemClickListener( new OnItemClickListener() {
    			
                @Override
                public void onItemClick(AdapterView<?> arg0, View view, int index, long arg3) {
    				
                    // with the index, figure out what sensor was pressed
                    Sensor sensor = mSensorList.get(index);
    				
                    // pass the sensor to the dialog.
                    SensorDialog dialog = new SensorDialog(getActivity(), sensor);
    
                    dialog.setContentView(R.layout.sensor_display);
                    dialog.setTitle("Sensor Data");
                    dialog.show();
                }
            });
    		
            return mContentView;
        }
    	
        void updateContent(int category, int position) {
            mSensorInfoListAdapter = new SimpleAdapter(getActivity(), 
    	    getData() , android.R.layout.simple_list_item_2,
    	    new String[] {
    	        "NAME",
    	        "VALUE"
    	    },
    	    new int[] { android.R.id.text1, android.R.id.text2 });
    	mSensorInfoList.setAdapter(mSensorInfoListAdapter);
        }
    	
    	
        protected void addItem(List<Map<String, String>> data, String name, String value)   {
            Map<String, String> temp = new HashMap<String, String>();
            temp.put("NAME", name);
            temp.put("VALUE", value);
            data.add(temp);
        }
    	
    	
        private List<? extends Map<String, ?>> getData() {
            List<Map<String, String>> myData = new ArrayList<Map<String, String>>();
            mSensorList = mSensorManager.getSensorList(Sensor.TYPE_ALL);
    		
            for (Sensor sensor : mSensorList ) {
                addItem(myData, sensor.getName(),  "Vendor: " + sensor.getVendor() + ", min. delay: " + sensor.getMinDelay() +", power while in use: " + sensor.getPower() + "mA, maximum range: " + sensor.getMaximumRange() + ", resolution: " + sensor.getResolution());
            }
            return myData;
        }
    }

    Code Example 1. A fragment that displays the list of sensors**

    Sensor Coordinate System

    The sensor framework reports sensor data using a standard 3-axis coordinate system, where X, Y, and Z are represented by values[0], values[1], and values[2] in the SensorEvent object, respectively.

    Some sensors, such as the light, temperature, proximity, and pressure sensors, return only single values. For these sensors, only values[0] of the SensorEvent object is used.

    Other sensors report data using the standard 3-axis sensor coordinate system. The following is a list of such sensors:

    • Accelerometer
    • Gravity sensor
    • Gyroscope
    • Geomagnetic field sensor

    The 3-axis sensor coordinate system is defined relative to the natural (default) orientation of the device's screen. For a phone, the default orientation is portrait; for a tablet, the natural orientation is landscape. When a device is held in its natural orientation, the x axis is horizontal and points to the right, the y axis is vertical and points up, and the z axis points out of the front of the screen. Figure 2 shows the sensor coordinate system for a phone and Figure 3 shows it for a tablet.


    Figure 2. The sensor coordinate system for a phone


    Figure 3. The sensor coordinate system for a tablet

    The most important point to remember about the sensor coordinate system is that it never changes when the device moves or its orientation changes.
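
    As a quick illustration of this axis convention (not part of the original article), the fragment below estimates how far the device is tilted away from "screen facing up" using the gravity components reported by the accelerometer; when the device lies flat and still, values[2] is close to +g and the inclination is close to zero.

    @Override
    public void onSensorChanged(SensorEvent event) {
        if (event.sensor.getType() != Sensor.TYPE_ACCELEROMETER) {
            return;
        }
        float ax = event.values[0];   // x axis: points to the right of the screen
        float ay = event.values[1];   // y axis: points to the top of the screen
        float az = event.values[2];   // z axis: points out of the front of the screen
        // Magnitude of the measured gravity vector (about 9.81 m/s2 at rest).
        double g = Math.sqrt(ax * ax + ay * ay + az * az);
        // Angle between the screen's z axis and "up": 0 degrees lying flat,
        // about 90 degrees when the device is held upright.
        double inclinationDegrees = Math.toDegrees(Math.acos(az / g));
    }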

    Monitoring Sensor Events

    The sensor framework reports sensor data with SensorEvent objects. A class can monitor the data of a specific sensor by implementing the SensorEventListener interface and registering with the SensorManager for that sensor. The sensor framework informs the class about changes in the sensor state through the following two SensorEventListener callback methods implemented by the class: onAccuracyChanged() and onSensorChanged().

    Code Example 2 implements the SensorDialog used in the SensorInfoFragment example described in the "Obtaining Sensor Configuration" section.

    package com.intel.deviceinfo;
    
    import android.app.Dialog;
    import android.content.Context;
    import android.hardware.Sensor;
    import android.hardware.SensorEvent;
    import android.hardware.SensorEventListener;
    import android.hardware.SensorManager;
    import android.os.Bundle;
    import android.widget.TextView;
    
    public class SensorDialog extends Dialog implements SensorEventListener {
        Sensor mSensor;
        TextView mDataTxt;
        private SensorManager mSensorManager;
    
        public SensorDialog(Context ctx, Sensor sensor) {
            super(ctx);
            // Initialize the sensor manager here; the listing has no one-argument constructor to chain to.
            mSensorManager = (SensorManager)ctx.getSystemService(Context.SENSOR_SERVICE);
            mSensor = sensor;
        }
    	
        @Override
        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            mDataTxt = (TextView) findViewById(R.id.sensorDataTxt);
            mDataTxt.setText("...");
            setTitle(mSensor.getName());
        }
    	
        @Override
        protected void onStart() {
            super.onStart();
            mSensorManager.registerListener(this, mSensor,  SensorManager.SENSOR_DELAY_FASTEST);
        }
    		
        @Override
        protected void onStop() {
            super.onStop();
            mSensorManager.unregisterListener(this, mSensor);
        }
    
        @Override
        public void onAccuracyChanged(Sensor sensor, int accuracy) {
        }
    
        @Override
        public void onSensorChanged(SensorEvent event) {
            if (event.sensor.getType() != mSensor.getType()) {
                return;
            }
            StringBuilder dataStrBuilder = new StringBuilder();
            if ((event.sensor.getType() == Sensor.TYPE_LIGHT)||
                (event.sensor.getType() == Sensor.TYPE_TEMPERATURE)||
                (event.sensor.getType() == Sensor.TYPE_PRESSURE)) {
                dataStrBuilder.append(String.format("Data: %.3fn", event.values[0]));
            }
            else{         
                dataStrBuilder.append( 
                    String.format("Data: %.3f, %.3f, %.3fn", 
                    event.values[0], event.values[1], event.values[2] ));
            }
            mDataTxt.setText(dataStrBuilder.toString());
        }
    }

    Code Example 2. A dialog that shows the sensor values**

    Motion Sensors

    Motion sensors are used to monitor device movements, such as shaking, rotating, swinging, or tilting. The accelerometer and the gyroscope are motion sensors that are available on many phones and tablets.

    Motion sensors report data using the sensor coordinate system, where the three values of the SensorEvent object, values[0], values[1], and values[2], represent the values for the x, y, and z axes, respectively.

    To understand the motion sensors and apply their data in an application, we need to apply some physics formulas related to force, mass, acceleration, Newton's laws of motion, and the relationship between several of these quantities over time. To learn more about these formulas and relationships, refer to your favorite physics textbooks or public-domain sources.

    Accelerometer

    The accelerometer measures the acceleration applied to the device, and its properties are summarized in Table 3.

    Sensor          Type                 SensorEvent data (m/s²)   Description
    Accelerometer   TYPE_ACCELEROMETER   values[0]                 Acceleration along the x axis
                                         values[1]                 Acceleration along the y axis
                                         values[2]                 Acceleration along the z axis

    Table 3. The accelerometer

    The concept of the accelerometer is derived from Newton's second law of motion:

    a = F/m

    The acceleration of an object is the result of the net external forces applied to it. The external forces include gravity, which applies to all objects on Earth. The acceleration is proportional to the net force F applied to the object and inversely proportional to the object's mass m.

    In our code, instead of directly using the equation above, we are more interested in the effect of the acceleration over a period of time on the device's velocity and position. The following equation describes the relationship between an object's velocity v1, its original velocity v0, the acceleration a, and the time t:

    v1 = v0 + at

    To calculate the position displacement s of the object, we use the following equation:

    s = v0t + (1/2)at²

    In many cases we start with the condition v0 equal to 0 (before the device starts moving), which simplifies the equation to:

    s = at²/2

    Because of gravity, the gravitational acceleration, represented by the symbol g, applies to all objects on Earth. Regardless of an object's mass, g depends only on the latitude of the object's location, with a value ranging from 9.78 to 9.82 (m/s²). We adopt the conventional standard value for g:

    g = 9.80665 (m/s²)

    Because the accelerometer returns values using the device's multidimensional coordinate system, in our code we can calculate the distances traveled along the x, y, and z axes using the following equations:

    Sx = AxT²/2
    Sy = AyT²/2
    Sz = AzT²/2

    Sx, Sy, and Sz are the displacements along the x, y, and z axes, respectively, and Ax, Ay, and Az are the accelerations along the x, y, and z axes, respectively. T is the time of the measurement period.

    public class SensorDialog extends Dialog implements SensorEventListener {
        …	
        private Sensor mSensor;
        private SensorManager mSensorManager;
    	
        public SensorDialog(Context context) {
            super(context);
            mSensorManager = (SensorManager)context.getSystemService(Context.SENSOR_SERVICE);
            mSensor = mSensorManager.getDefaultSensor(Sensor.TYPE_ACCELEROMETER);
        …
    }

    Code Example 3. Instantiating an accelerometer**
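
    As a rough illustration of the displacement equations above, the following sketch (not part of the original sample; the class and field names are hypothetical) accumulates per-axis velocity and displacement from successive accelerometer samples. Note that TYPE_ACCELEROMETER values include gravity and that double integration drifts quickly in practice, so this only shows how S = v0T + AT²/2 maps to code.

    public class DisplacementTracker {
        // timestamp of the previous sample, in nanoseconds (SensorEvent.timestamp)
        private long mLastTimestampNs = 0;
        private final float[] mVelocity = new float[3];      // m/s along x, y, z
        private final float[] mDisplacement = new float[3];  // meters along x, y, z

        // call this from onSensorChanged() with event.timestamp and event.values
        public void addSample(long timestampNs, float[] accel) {
            if (mLastTimestampNs != 0) {
                float t = (timestampNs - mLastTimestampNs) * 1.0e-9f;  // seconds
                for (int axis = 0; axis < 3; axis++) {
                    // s = v0*t + (1/2)*a*t^2, then v1 = v0 + a*t
                    mDisplacement[axis] += mVelocity[axis] * t + 0.5f * accel[axis] * t * t;
                    mVelocity[axis] += accel[axis] * t;
                }
            }
            mLastTimestampNs = timestampNs;
        }

        public float[] getDisplacement() {
            return mDisplacement.clone();
        }
    }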

    Sometimes we do not use all three dimensions of the data. At other times we may also need to take the device orientation into account. For example, in a maze application we use only the x-axis and y-axis gravitational acceleration to calculate the ball's movement directions and distances, based on the device orientation. The following code fragment (Code Example 4) outlines the logic.

    @Override
    public void onSensorChanged(SensorEvent event) {
        if (event.sensor.getType() != Sensor.TYPE_ACCELEROMETER) {
            return;
        } 
    float accelX, accelY;
    …
    //detect the current rotation currentRotation from its “natural orientation”
    //using the WindowManager
        switch (currentRotation) {
            case Surface.ROTATION_0:
                accelX = event.values[0];
                accelY = event.values[1];
                break;
            case Surface.ROTATION_90:
                accelX = -event.values[0];
                accelY = event.values[1];
                break;
            case Surface.ROTATION_180:
                accelX = -event.values[0];
                accelY = -event.values[1];
                break;
            case Surface.ROTATION_270:
                accelX = event.values[0];
                accelY = -event.values[1];
                break;
        }
        //calculate the ball’s moving distances along x, and y using accelX, accelY and the time delta
        …
    }

    Code Example 4. Taking device orientation into account when using accelerometer data in a maze game**

    Gyroscope


    The gyroscope (or simply gyro) measures the device's rate of rotation around the x-, y-, and z-axes, as shown in Table 4. Gyroscope data values can be positive or negative. Looking at the origin from a position along the positive half of an axis, if the rotation is counterclockwise around the axis the value is positive; if it is clockwise, the value is negative. We can also determine the direction of a gyroscope value using the "right-hand rule", illustrated in Figure 4.


    Figure 4. Using the "right-hand rule" to decide the positive rotation direction

    Sensor: Gyroscope (TYPE_GYROSCOPE)
    SensorEvent data (rad/s): values[0], values[1], values[2]
    Description: rate of rotation around the x-, y-, and z-axes

    Table 4. The gyroscope

    Code Example 5 shows how to instantiate a gyroscope.

    public class SensorDialog extends Dialog implements SensorEventListener {
        …	
        private Sensor mGyro;
        private SensorManager mSensorManager;
    	
        public SensorDialog(Context context) {
            super(context);
            mSensorManager = (SensorManager)context.getSystemService(Context.SENSOR_SERVICE);
            mGyro = mSensorManager.getDefaultSensor(Sensor.TYPE_GYROSCOPE);
        …
    }

    Code Example 5. Instantiating a gyroscope**

    Position Sensors

    Many Android tablets support two position sensors: the magnetometer and the proximity sensor. The magnetometer measures the strength of Earth's magnetic field along the x-, y-, and z-axes, while the proximity sensor detects the distance of the device from another object.

    Magnetometer

    The most important use of the magnetometer (described in Table 5) on Android systems is to implement a compass.

    Sensor: Magnetometer (TYPE_MAGNETIC_FIELD)
    SensorEvent data (µT): values[0], values[1], values[2]
    Description: Earth's magnetic field strength along the x-, y-, and z-axes

    Table 5. The magnetometer

    Code Example 6 shows how to instantiate a magnetometer.

    public class SensorDialog extends Dialog implements SensorEventListener {
        …	
        private Sensor mMagnetometer;
        private SensorManager mSensorManager;
    	
        public SensorDialog(Context context) {
            super(context);
            mSensorManager = (SensorManager)context.getSystemService(Context.SENSOR_SERVICE);
            mMagnetometer = mSensorManager.getDefaultSensor(Sensor.TYPE_MAGNETIC_FIELD);
        …
    }

    Code Example 6. Instantiating a magnetometer**
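
    To illustrate the compass use case mentioned above, the following sketch (not part of the original sample; CompassHelper and its fields are hypothetical names) combines the latest accelerometer and magnetometer readings with SensorManager.getRotationMatrix() and getOrientation() to compute a heading. The listener would be registered for both TYPE_ACCELEROMETER and TYPE_MAGNETIC_FIELD, in the same way as the dialogs in the earlier examples.

    import android.hardware.Sensor;
    import android.hardware.SensorEvent;
    import android.hardware.SensorEventListener;
    import android.hardware.SensorManager;

    public class CompassHelper implements SensorEventListener {
        private final float[] mGravity = new float[3];      // latest accelerometer values
        private final float[] mGeomagnetic = new float[3];  // latest magnetometer values

        @Override
        public void onSensorChanged(SensorEvent event) {
            if (event.sensor.getType() == Sensor.TYPE_ACCELEROMETER) {
                System.arraycopy(event.values, 0, mGravity, 0, 3);
            } else if (event.sensor.getType() == Sensor.TYPE_MAGNETIC_FIELD) {
                System.arraycopy(event.values, 0, mGeomagnetic, 0, 3);
            }

            float[] rotationMatrix = new float[9];
            if (SensorManager.getRotationMatrix(rotationMatrix, null, mGravity, mGeomagnetic)) {
                float[] orientation = new float[3];
                SensorManager.getOrientation(rotationMatrix, orientation);
                // orientation[0] is the azimuth (rotation around the z-axis) in radians;
                // converting it to degrees gives a heading relative to magnetic north.
                float headingDegrees = (float) Math.toDegrees(orientation[0]);
                // rotate a compass needle, update the UI, etc.
            }
        }

        @Override
        public void onAccuracyChanged(Sensor sensor, int accuracy) {
        }
    }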

    Proximity

    The proximity sensor detects the distance between the device and another object. The device can use this measurement to detect whether it is being held close to the user (see Table 6), and therefore determine whether the user is on a phone call and turn off the display during the call.

    Table 6: The proximity sensor
    Sensor: Proximity (TYPE_PROXIMITY)
    SensorEvent data: values[0]
    Description: distance to an object in cm. Some proximity sensors only report a boolean value indicating whether the object is close enough.

    Code Example 7 shows how to instantiate a proximity sensor.

    public class SensorDialog extends Dialog implements SensorEventListener {
        …	
        private Sensor mProximity;
        private SensorManager mSensorManager;
    	
        public SensorDialog(Context context) {
            super(context);
            mSensorManager = (SensorManager)context.getSystemService(Context.SENSOR_SERVICE);
            mProximity = mSensorManager.getDefaultSensor(Sensor.TYPE_PROXIMITY);
        …
    }

    Code Example 7. Instantiating a proximity sensor**
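
    A minimal sketch (hypothetical listener class, not part of the original sample) of how proximity readings are commonly interpreted: many proximity sensors only report two values, 0 (near) and the sensor's maximum range (far), so the reading is usually compared against getMaximumRange().

    import android.hardware.Sensor;
    import android.hardware.SensorEvent;
    import android.hardware.SensorEventListener;

    public class ProximityListener implements SensorEventListener {
        @Override
        public void onSensorChanged(SensorEvent event) {
            if (event.sensor.getType() != Sensor.TYPE_PROXIMITY) {
                return;
            }
            float distanceCm = event.values[0];
            boolean isNear = distanceCm < event.sensor.getMaximumRange();
            if (isNear) {
                // e.g., the device is held against the user's ear: dim or blank the screen
            } else {
                // the device has been moved away: restore the screen
            }
        }

        @Override
        public void onAccuracyChanged(Sensor sensor, int accuracy) {
        }
    }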

    Environment Sensors

    Environment sensors detect and report the device's ambient environment parameters, such as light, temperature, pressure, and humidity. The ambient light sensor (ALS) and the pressure sensor (barometer) are available on many Android tablets.

    Ambient Light Sensor (ALS)

    The ambient light sensor, described in Table 7, is used by the system to detect the illumination of the surrounding environment and automatically adjust the screen brightness accordingly.

    Table 7: The ambient light sensor
    Sensor: ALS (TYPE_LIGHT)
    SensorEvent data (lx): values[0]
    Description: the illumination around the device

    Code Example 8 shows how to instantiate an ambient light sensor.

    …	
        private Sensor mALS;
        private SensorManager mSensorManager;
    
        …	
            mSensorManager = (SensorManager)context.getSystemService(Context.SENSOR_SERVICE);
            mALS = mSensorManager.getDefaultSensor(Sensor.TYPE_LIGHT);
        …

    Code Example 8. Instantiating an ambient light sensor**

    Barometer

    Applications can use the atmospheric pressure sensor (barometer), described in Table 8, to calculate the altitude of the device's current location.

    Table 8: The atmospheric pressure sensor
    Sensor: Barometer (TYPE_PRESSURE)
    SensorEvent data (mbar): values[0]
    Description: the ambient air pressure in mbar

    Code Example 9 shows how to instantiate a barometer.

    …	
        private Sensor mBarometer;
        private SensorManager mSensorManager;
    
        …	
            mSensorManager = (SensorManager)context.getSystemService(Context.SENSOR_SERVICE);
            mBarometer = mSensorManager.getDefaultSensor(Sensor.TYPE_PRESSURE);
        …

    Code Example 9. Instantiating a barometer**
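
    As a follow-on to the altitude use case mentioned above, this sketch (hypothetical listener, not part of the original sample) converts a pressure reading to an altitude estimate with the framework helper SensorManager.getAltitude(). The standard sea-level pressure constant is used here, so the result is only approximate unless the current sea-level pressure for the location is supplied instead.

    import android.hardware.Sensor;
    import android.hardware.SensorEvent;
    import android.hardware.SensorEventListener;
    import android.hardware.SensorManager;

    public class BarometerListener implements SensorEventListener {
        @Override
        public void onSensorChanged(SensorEvent event) {
            if (event.sensor.getType() != Sensor.TYPE_PRESSURE) {
                return;
            }
            float pressureMbar = event.values[0];
            // altitude in meters, relative to the standard sea-level pressure of 1013.25 mbar
            float altitudeMeters = SensorManager.getAltitude(
                    SensorManager.PRESSURE_STANDARD_ATMOSPHERE, pressureMbar);
            // use altitudeMeters, e.g., display it or feed it into an activity tracker
        }

        @Override
        public void onAccuracyChanged(Sensor sensor, int accuracy) {
        }
    }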

    Sensor Performance and Optimization Guidelines

    To use sensors in your applications, you should follow these best practices:

    • Always check the availability of a specific sensor before using it
      The Android platform does not require a specific sensor to be included on or excluded from a device. Before using a sensor in your application, always check whether it is actually available (see the sketch after this list).
    • Always unregister sensor listeners
      If the activity that implements the sensor listener is becoming invisible, or the dialog is stopping, unregister the sensor listener. This can be done in the activity's onPause() method or in the dialog's onStop() method. Otherwise, the sensor keeps acquiring data and drains the battery unnecessarily.
    • Do not block the onSensorChanged() method
      The onSensorChanged() method is called frequently by the system to deliver sensor data. Put as little logic in this method as possible; complex computations on the sensor data should be moved out of it.
    • Always test sensor applications on real devices
      All sensors described in this section are hardware sensors. The Android emulator may not be able to simulate a particular sensor's functions and performance.
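
    The availability check from the first guideline can be as simple as the following sketch (a hypothetical helper method; mSensorManager and the listener are assumed to come from the earlier examples): getDefaultSensor() returns null when the requested sensor type is not present on the device.

    // returns true if the sensor is present and the listener was registered
    private boolean registerIfAvailable(SensorEventListener listener, int sensorType) {
        Sensor sensor = mSensorManager.getDefaultSensor(sensorType);
        if (sensor == null) {
            // the device has no sensor of this type: disable the dependent feature gracefully
            return false;
        }
        mSensorManager.registerListener(listener, sensor, SensorManager.SENSOR_DELAY_NORMAL);
        return true;
    }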

    GPS and Location


    GPS (Global Positioning System) is a satellite-based system that provides accurate geo-location information around the world. GPS is available on many Android phones and tablets. In many respects GPS behaves like a position sensor: it can provide accurate location data to applications running on the device. On the Android platform, GPS is not handled directly by the sensor framework. Instead, the Android location service accesses GPS data and passes it to an application through location listener callbacks.

    This section discusses GPS and location services only from a hardware sensor point of view. The complete location strategies offered by Android 4.2 phones and tablets based on Intel Atom processors are a much broader topic that is outside the scope of this section.

    Android Location Services

    Using GPS is not the only way to obtain location information on an Android device. The system can also use Wi-Fi*, cellular networks, or other wireless networks to obtain the device's current location. GPS and wireless networks (including Wi-Fi and cellular networks) act as "location providers" for the Android location services. Table 9 lists the main classes and interfaces used to access the Android location services.

    Table 9: The Android platform location service
    LocationManager (class): used to access the location services; provides various methods for requesting periodic location updates for an application or sending proximity alerts
    LocationProvider (abstract class): the abstract superclass of location providers
    Location (class): used by location providers to encapsulate geographic data
    LocationListener (interface): used to receive location notifications from the LocationManager

    Obtaining GPS Location Updates

    Similar to the mechanism of using the sensor framework to access sensor data, the application implements several callback methods defined in the LocationListener interface to receive GPS location updates. The LocationManager sends GPS update notifications to the application through these callbacks (the "don't call us, we'll call you" rule).

    To access GPS location data in the application, you need to request the fine location access permission in the Android manifest file (Code Example 10).

    <manifest …>
    …
        <uses-permission android:name="android.permission.ACCESS_FINE_LOCATION" />
    …  
    </manifest>

    Code Example 10. Requesting permission to access fine location data in the manifest file**

    Code Example 11 shows how to obtain GPS location updates and display the latitude and longitude coordinates as text in a dialog.

    package com.intel.deviceinfo;
    
    import android.app.Dialog;
    import android.content.Context;
    import android.location.Location;
    import android.location.LocationListener;
    import android.location.LocationManager;
    import android.os.Bundle;
    import android.widget.TextView;
    
    public class GpsDialog extends Dialog implements LocationListener {
        TextView mDataTxt;
        private LocationManager mLocationManager;
    	
        public GpsDialog(Context context) {
            super(context);
            mLocationManager = (LocationManager)context.getSystemService(Context.LOCATION_SERVICE);
        }
    
        @Override
        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            mDataTxt = (TextView) findViewById(R.id.sensorDataTxt);
            mDataTxt.setText("...");
    		
            setTitle("Gps Data");
        }
    	
        @Override
        protected void onStart() {
            super.onStart();
            mLocationManager.requestLocationUpdates(
                LocationManager.GPS_PROVIDER, 0, 0, this);
        }
    		
        @Override
        protected void onStop() {
            super.onStop();
            mLocationManager.removeUpdates(this);
        }
    
        @Override
        public void onStatusChanged(String provider, int status, 
            Bundle extras) {
        }
    
        @Override
        public void onProviderEnabled(String provider) {
        }
    
        @Override
        public void onProviderDisabled(String provider) {
        }
    
        @Override
        public void onLocationChanged(Location location) {
            StringBuilder dataStrBuilder = new StringBuilder();
            dataStrBuilder.append(String.format("Latitude: %.3f,   Logitude%.3fn", location.getLatitude(), location.getLongitude()));
            mDataTxt.setText(dataStrBuilder.toString());
    		
        }
    }

    Code Example 11. A dialog that displays the GPS location data**

    GPS and Location Performance and Optimization Guidelines

    GPS provides the most accurate location information about the device. On the other hand, as a hardware feature it consumes extra power, and it takes time for GPS to get the first location fix. Here are some guidelines to follow when developing GPS- and location-aware applications:

    • Consider all available location providers
      In addition to GPS_PROVIDER, there is also NETWORK_PROVIDER. If your application only needs coarse location data, it may be preferable to use NETWORK_PROVIDER.
    • Use cached locations
      It takes time for GPS to get the first location fix. While your application is waiting for GPS to deliver an accurate location update, you can first use the locations provided by the LocationManager's getLastKnownLocation() method to start working (see the sketch after this list).
    • Minimize the frequency and duration of location update requests
      Request location updates only when they are needed, and unregister from the location manager as soon as location updates are no longer required.
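
    A minimal sketch of the cached-location tip (assuming the mLocationManager field from Code Example 11): seed the UI with the last known fix, if any, while waiting for onLocationChanged() to deliver a fresh one.

    Location cached = mLocationManager.getLastKnownLocation(LocationManager.GPS_PROVIDER);
    if (cached == null) {
        // fall back to the last fix from the network provider, which may be coarser
        cached = mLocationManager.getLastKnownLocation(LocationManager.NETWORK_PROVIDER);
    }
    if (cached != null) {
        // show the (possibly stale) position immediately; onLocationChanged() will refine it
    }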

    Summary


    The Android platform provides APIs for developers to access a device's built-in sensors. These sensors can provide raw data about the device's current motion, position, and ambient environment conditions with high precision and accuracy. When developing sensor applications, follow the best practices to improve performance and power efficiency.

    About the Author

    Miao Wei is a software engineer in the Intel Software and Services Group. He is currently working on Intel® Atom™ processor scale-enabling projects.



    Copyright © 2013 Intel Corporation. All rights reserved.
    *Other names and brands may be claimed as the property of others.

    **This sample source code is released under the Intel Sample Source Code License Agreement.

  • Developers
  • Android*
  • Android*
  • Intel® Atom™ Processors
  • Sensors
  • Phone
  • Tablet
  • URL
  • Desarrollo de aplicaciones de sensores en teléfonos y tabletas Android* basados en el procesador Intel® Atom™


    Developing Sensor Applications on Intel® Atom™ Processor-Based Android* Phones and Tablets


    This guide gives application developers an introduction to the Android sensor framework and discusses how to use some of the sensors that are generally available on phones and tablets based on the Intel® Atom™ processor. Topics covered include motion, position, and environment sensors. Although GPS is not strictly categorized as a sensor in the Android framework, this guide also discusses GPS-based location services. The discussion in this guide is based on Android 4.2, Jelly Bean.

    Sensors on Intel® Atom™ Processor-Based Android Phones and Tablets


    Android phones and tablets based on Intel Atom processors can support a wide range of hardware sensors. These sensors are used to detect motion and position changes and to report the ambient environment parameters. The block diagram in Figure 1 shows a possible sensor configuration on a typical Intel Atom processor-based Android device.


    Figure 1. Sensors on an Intel® Atom™-based Android system

    Based on the data they report, we can categorize Android sensors into the classes and types shown in Table 1.

    Motion sensors:
      Accelerometer (TYPE_ACCELEROMETER): measures a device's accelerations in m/s². Typical use: motion detection.
      Gyroscope (TYPE_GYROSCOPE): measures a device's rates of rotation. Typical use: rotation detection.
    Position sensors:
      Magnetometer (TYPE_MAGNETIC_FIELD): measures Earth's geomagnetic field strength in µT. Typical use: compass.
      Proximity (TYPE_PROXIMITY): measures the proximity of an object in cm. Typical use: nearby object detection.
      GPS (not an android.hardware.Sensor type): obtains accurate geo-locations of the device. Typical use: accurate geo-location detection.
    Environment sensors:
      ALS (TYPE_LIGHT): measures the ambient light level in lx. Typical use: automatic screen brightness control.
      Barometer: measures the ambient air pressure in mbar. Typical use: altitude detection.

    Table 1. Sensor types supported by the Android platform

    Android Sensor Framework


    The Android sensor framework provides mechanisms to access the sensors and sensor data, with the exception of GPS, which is accessed through the Android location services; these are discussed later in this document. The sensor framework is part of the android.hardware package. Table 2 lists the main classes and interfaces of the sensor framework.

    SensorManager (class): used to create an instance of the sensor service; provides various methods for accessing sensors, registering and unregistering sensor event listeners, and so on.
    Sensor (class): used to create an instance of a specific sensor.
    SensorEvent (class): used by the system to publish sensor data; includes the raw sensor data values, the sensor type, the data accuracy, and a timestamp.
    SensorEventListener (interface): provides callback methods to receive notifications from the SensorManager when the sensor data or accuracy has changed.

    Table 2. The Android platform sensor framework

    Obtaining the Sensor Configuration

    Device manufacturers decide which sensors are available on the device. You must discover which sensors are available at runtime by calling the sensor framework's SensorManager getSensorList() method with the parameter "Sensor.TYPE_ALL". Code Example 1 displays the list of available sensors and the vendor, power, and accuracy information of each sensor.

    package com.intel.deviceinfo;
    	
    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;
    
    import android.app.Fragment;
    import android.content.Context;
    import android.hardware.Sensor;
    import android.hardware.SensorManager;
    import android.os.Bundle;
    import android.view.LayoutInflater;
    import android.view.View;
    import android.view.ViewGroup;
    import android.widget.AdapterView;
    import android.widget.AdapterView.OnItemClickListener;
    import android.widget.ListView;
    import android.widget.SimpleAdapter;
    	
    public class SensorInfoFragment extends Fragment {
    	
        private View mContentView;
    	
        private ListView mSensorInfoList;	
        SimpleAdapter mSensorInfoListAdapter;
    	
        private List<Sensor> mSensorList;
    
        private SensorManager mSensorManager;
    	
        @Override
        public void onActivityCreated(Bundle savedInstanceState) {
            super.onActivityCreated(savedInstanceState);
        }
    	
        @Override
        public void onPause() 
        { 
            super.onPause();
        }
    	
        @Override
        public void onResume() 
        {
            super.onResume();
        }
    	
        @Override
        public View onCreateView(LayoutInflater inflater, ViewGroup container,
                Bundle savedInstanceState) {
            mContentView = inflater.inflate(R.layout.content_sensorinfo_main, null);
            mContentView.setDrawingCacheEnabled(false);
    	
            mSensorManager = (SensorManager)getActivity().getSystemService(Context.SENSOR_SERVICE);
    	
            mSensorInfoList = (ListView)mContentView.findViewById(R.id.listSensorInfo);
    		
            mSensorInfoList.setOnItemClickListener( new OnItemClickListener() {
    			
                @Override
                public void onItemClick(AdapterView<?> arg0, View view, int index, long arg3) {
    				
                    // with the index, figure out what sensor was pressed
                    Sensor sensor = mSensorList.get(index);
    				
                    // pass the sensor to the dialog.
                    SensorDialog dialog = new SensorDialog(getActivity(), sensor);
    
                    dialog.setContentView(R.layout.sensor_display);
                    dialog.setTitle("Sensor Data");
                    dialog.show();
                }
            });
    		
            return mContentView;
        }
    	
        void updateContent(int category, int position) {
            mSensorInfoListAdapter = new SimpleAdapter(getActivity(), 
    	    getData() , android.R.layout.simple_list_item_2,
    	    new String[] {
    	        "NAME",
    	        "VALUE"
    	    },
    	    new int[] { android.R.id.text1, android.R.id.text2 });
    	mSensorInfoList.setAdapter(mSensorInfoListAdapter);
        }
    	
    	
        protected void addItem(List<Map<String, String>> data, String name, String value)   {
            Map<String, String> temp = new HashMap<String, String>();
            temp.put("NAME", name);
            temp.put("VALUE", value);
            data.add(temp);
        }
    	
    	
        private List<? extends Map<String, ?>> getData() {
            List<Map<String, String>> myData = new ArrayList<Map<String, String>>();
            mSensorList = mSensorManager.getSensorList(Sensor.TYPE_ALL);
    		
            for (Sensor sensor : mSensorList ) {
                addItem(myData, sensor.getName(),  "Vendor: " + sensor.getVendor() + ", min. delay: " + sensor.getMinDelay() +", power while in use: " + sensor.getPower() + "mA, maximum range: " + sensor.getMaximumRange() + ", resolution: " + sensor.getResolution());
            }
            return myData;
        }
    }

    Code Example 1. A fragment that displays the list of sensors**

    Sensor Coordinate System

    The sensor framework reports sensor data using a standard 3-axis coordinate system, where X, Y, and Z are represented by values[0], values[1], and values[2] in the SensorEvent object, respectively.

    Some sensors, such as light, temperature, proximity, and pressure, return only single values. For these sensors only values[0] of the SensorEvent object is used.

    Other sensors report data in the standard 3-axis sensor coordinate system. The following is a list of such sensors:

    • Accelerometer
    • Gravity sensor
    • Gyroscope
    • Geomagnetic field sensor

    The 3-axis sensor coordinate system is defined relative to the device's screen in its natural (default) orientation. For a phone, the default orientation is portrait; for a tablet, the natural orientation is landscape. When a device is held in its natural orientation, the x-axis is horizontal and points to the right, the y-axis is vertical and points up, and the z-axis points out of the front of the screen. Figure 2 shows the sensor coordinate system for a phone, and Figure 3 shows it for a tablet.


    Figure 2. The sensor coordinate system for a phone


    Figure 3. The sensor coordinate system for a tablet

    The most important point about the sensor coordinate system is that it never changes when the device moves or changes its orientation.
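
    Because the sensor coordinate system is fixed to the device's natural orientation, applications that work with rotation matrices usually compensate for the current screen rotation themselves. The following sketch (not part of the original article; the helper name is hypothetical) shows one common way to do this with SensorManager.remapCoordinateSystem(), given a rotation matrix from getRotationMatrix() and the value returned by Display.getRotation().

    import android.hardware.SensorManager;
    import android.view.Surface;

    public class CoordinateHelper {
        // returns a copy of rotationMatrix remapped to the current display rotation
        static float[] remapForDisplayRotation(float[] rotationMatrix, int displayRotation) {
            float[] remapped = new float[9];
            switch (displayRotation) {
                case Surface.ROTATION_90:
                    SensorManager.remapCoordinateSystem(rotationMatrix,
                            SensorManager.AXIS_Y, SensorManager.AXIS_MINUS_X, remapped);
                    break;
                case Surface.ROTATION_180:
                    SensorManager.remapCoordinateSystem(rotationMatrix,
                            SensorManager.AXIS_MINUS_X, SensorManager.AXIS_MINUS_Y, remapped);
                    break;
                case Surface.ROTATION_270:
                    SensorManager.remapCoordinateSystem(rotationMatrix,
                            SensorManager.AXIS_MINUS_Y, SensorManager.AXIS_X, remapped);
                    break;
                default:  // Surface.ROTATION_0: natural orientation, no remapping needed
                    System.arraycopy(rotationMatrix, 0, remapped, 0, 9);
                    break;
            }
            return remapped;
        }
    }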

    Monitoring Sensor Events

    The sensor framework reports sensor data with SensorEvent objects. A class can monitor the data of a specific sensor by implementing the SensorEventListener interface and registering with the SensorManager for that sensor. The sensor framework informs the class about changes in the sensor's state through the following two SensorEventListener callback methods implemented by the class: onAccuracyChanged() and onSensorChanged().

    Code Example 2 implements the SensorDialog used in the SensorInfoFragment example described in the section "Obtaining the Sensor Configuration".

    package com.intel.deviceinfo;
    
    import android.app.Dialog;
    import android.content.Context;
    import android.hardware.Sensor;
    import android.hardware.SensorEvent;
    import android.hardware.SensorEventListener;
    import android.hardware.SensorManager;
    import android.os.Bundle;
    import android.widget.TextView;
    
    public class SensorDialog extends Dialog implements SensorEventListener {
        Sensor mSensor;
        TextView mDataTxt;
        private SensorManager mSensorManager;
    
        public SensorDialog(Context ctx, Sensor sensor) {
            this(ctx);  // the single-argument constructor (see Code Example 3) obtains the SensorManager
            mSensor = sensor;
        }
    	
        @Override
        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            mDataTxt = (TextView) findViewById(R.id.sensorDataTxt);
            mDataTxt.setText("...");
            setTitle(mSensor.getName());
        }
    	
        @Override
        protected void onStart() {
            super.onStart();
            mSensorManager.registerListener(this, mSensor,  SensorManager.SENSOR_DELAY_FASTEST);
        }
    		
        @Override
        protected void onStop() {
            super.onStop();
            mSensorManager.unregisterListener(this, mSensor);
        }
    
        @Override
        public void onAccuracyChanged(Sensor sensor, int accuracy) {
        }
    
        @Override
        public void onSensorChanged(SensorEvent event) {
            if (event.sensor.getType() != mSensor.getType()) {
                return;
            }
            StringBuilder dataStrBuilder = new StringBuilder();
            if ((event.sensor.getType() == Sensor.TYPE_LIGHT)||
                (event.sensor.getType() == Sensor.TYPE_TEMPERATURE)||
                (event.sensor.getType() == Sensor.TYPE_PRESSURE)) {
                dataStrBuilder.append(String.format("Data: %.3f\n", event.values[0]));
            }
            else {
                dataStrBuilder.append(
                    String.format("Data: %.3f, %.3f, %.3f\n",
                    event.values[0], event.values[1], event.values[2] ));
            }
            mDataTxt.setText(dataStrBuilder.toString());
        }
    }

    Code Example 2. A dialog that shows the sensor values**

    Motion Sensors

    Motion sensors are used to monitor device movement, such as shaking, rotating, swinging, or tilting. The accelerometer and the gyroscope are two motion sensors available on many phones and tablets.

    Motion sensors report data using the sensor coordinate system, where the three values of the SensorEvent object, values[0], values[1], and values[2], represent the values for the x-, y-, and z-axes, respectively.

    To understand motion sensors and use their data in an application, we need to apply some physics formulas related to force, mass, acceleration, Newton's laws of motion, and the relationships among these quantities over time. To learn more about these formulas and relationships, refer to your favorite physics textbooks or public domain sources.

    Accelerometer

    The accelerometer measures the acceleration applied to the device; its properties are summarized in Table 3.

    Sensor: Accelerometer (TYPE_ACCELEROMETER)
    SensorEvent data (m/s²): values[0], values[1], values[2]
    Description: acceleration along the x-, y-, and z-axes

    Table 3. The accelerometer

    The concept behind the accelerometer is derived from Newton's second law of motion:

    a = F/m

    The acceleration of an object is the result of the net external force applied to it. External forces include one that applies to every object on Earth: gravity. The acceleration is proportional to the net force F applied to the object and inversely proportional to the object's mass m.

    In our code, rather than using the equation above directly, we are more interested in the effect the acceleration has, over a period of time, on the device's velocity and position. The following equation describes the relationship between an object's velocity v1, its initial velocity v0, the acceleration a, and the time t:

    v1 = v0 + at

    To calculate the displacement s of the object's position, we use the following equation:

    s = v0t + (1/2)at²

    In most cases we start with the condition v0 equal to 0 (before the device starts moving), which simplifies the equation to:

    s = at²/2

    Because of gravity, the gravitational acceleration, represented by the symbol g, applies to every object on Earth. Regardless of an object's mass, g depends only on the latitude of the object's location, with values ranging from 9.78 to 9.82 (m/s²). We adopt the conventional standard value for g:

    g = 9.80665 (m/s²)

    Because the accelerometer returns its values using a multidimensional coordinate system, our code can calculate the distances traveled along the x-, y-, and z-axes using the following equations:

    Sx = AxT²/2
    Sy = AyT²/2
    Sz = AzT²/2

    where Sx, Sy, and Sz are the displacements along the x-, y-, and z-axes, respectively, and Ax, Ay, and Az are the accelerations along the x-, y-, and z-axes, respectively. T is the duration of the measurement period.

    public class SensorDialog extends Dialog implements SensorEventListener {
        …	
        private Sensor mSensor;
        private SensorManager mSensorManager;
    	
        public SensorDialog(Context context) {
            super(context);
            mSensorManager = (SensorManager)context.getSystemService(Context.SENSOR_SERVICE);
            mSensor = mSensorManager.getDefaultSensor(Sensor.TYPE_ACCELEROMETER);
        …
    }

    Code Example 3. Instantiating an accelerometer**

    Sometimes we do not use all three dimensions of the data. At other times we may also need to take the device orientation into account. For example, in a maze application we use only the x-axis and y-axis gravitational acceleration to calculate the ball's movement directions and distances, based on the device orientation. The following code fragment (Code Example 4) outlines the logic.

    @Override
    public void onSensorChanged(SensorEvent event) {
        if (event.sensor.getType() != Sensor.TYPE_ACCELEROMETER) {
            return;
        } 
    float accelX, accelY;
    …
    //detect the current rotation currentRotation from its “natural orientation”
    //using the WindowManager
        switch (currentRotation) {
            case Surface.ROTATION_0:
                accelX = event.values[0];
                accelY = event.values[1];
                break;
            case Surface.ROTATION_90:
                accelX = -event.values[0];
                accelY = event.values[1];
                break;
            case Surface.ROTATION_180:
                accelX = -event.values[0];
                accelY = -event.values[1];
                break;
            case Surface.ROTATION_270:
                accelX = event.values[0];
                accelY = -event.values[1];
                break;
        }
        //calculate the ball’s moving distances along x, and y using accelX, accelY and the time delta
        …
    }

    Code Example 4. Taking device orientation into account when using accelerometer data in a maze game**

    Gyroscope


    The gyroscope (or simply gyro) measures the device's rate of rotation around the x-, y-, and z-axes, as shown in Table 4. Gyroscope data values can be positive or negative. Looking at the origin from a position along the positive half of an axis, if the rotation is counterclockwise around the axis the value is positive; otherwise, the value is negative. We can also determine the direction of a gyroscope value using the "right-hand rule", as shown in Figure 4.


    Figure 4. Using the "right-hand rule" to decide the positive rotation direction

    Sensor: Gyroscope (TYPE_GYROSCOPE)
    SensorEvent data (rad/s): values[0], values[1], values[2]
    Description: rate of rotation around the x-, y-, and z-axes

    Table 4. The gyroscope

    Code Example 5 shows how to instantiate a gyroscope.

    public class SensorDialog extends Dialog implements SensorEventListener {
        …	
        private Sensor mGyro;
        private SensorManager mSensorManager;
    	
        public SensorDialog(Context context) {
            super(context);
            mSensorManager = (SensorManager)context.getSystemService(Context.SENSOR_SERVICE);
            mGyro = mSensorManager.getDefaultSensor(Sensor.TYPE_GYROSCOPE);
        …
    }

    Code Example 5. Instantiating a gyroscope**

    Position Sensors

    Many Android tablets support two position sensors: the magnetometer and the proximity sensor. The magnetometer measures the strength of Earth's magnetic field along the x-, y-, and z-axes, while the proximity sensor detects the distance of the device from another object.

    Magnetometer

    The most important use of the magnetometer (described in Table 5) on Android systems is to implement a compass.

    Sensor: Magnetometer (TYPE_MAGNETIC_FIELD)
    SensorEvent data (µT): values[0], values[1], values[2]
    Description: Earth's magnetic field strength along the x-, y-, and z-axes

    Table 5. The magnetometer

    Code Example 6 shows how to instantiate a magnetometer.

    public class SensorDialog extends Dialog implements SensorEventListener {
        …	
        private Sensor mMagnetometer;
        private SensorManager mSensorManager;
    	
        public SensorDialog(Context context) {
            super(context);
            mSensorManager = (SensorManager)context.getSystemService(Context.SENSOR_SERVICE);
            mMagnetometer = mSensorManager.getDefaultSensor(Sensor.TYPE_MAGNETIC_FIELD);
        …
    }

    Code Example 6. Instantiating a magnetometer**

    Proximity

    The proximity sensor provides the distance between the device and another object. The device can use it to detect whether it is being held close to the user (see Table 6), and therefore determine whether the user is on a phone call and turn off the display during the call.

    Table 6: The proximity sensor
    Sensor: Proximity (TYPE_PROXIMITY)
    SensorEvent data: values[0]
    Description: distance to an object in cm. Some proximity sensors only report a boolean value indicating whether the object is close enough.

    Code Example 7 shows how to instantiate a proximity sensor.

    public class SensorDialog extends Dialog implements SensorEventListener {
        …	
        private Sensor mProximity;
        private SensorManager mSensorManager;
    	
        public SensorDialog(Context context) {
            super(context);
            mSensorManager = (SensorManager)context.getSystemService(Context.SENSOR_SERVICE);
            mProximity = mSensorManager.getDefaultSensor(Sensor.TYPE_PROXIMITY);
        …
    }

    Code Example 7. Instantiating a proximity sensor**

    Environment Sensors

    Environment sensors detect and report the device's ambient environment parameters, such as light, temperature, pressure, and humidity. The ambient light sensor (ALS) and the pressure sensor (barometer) are available on many Android tablets.

    Ambient Light Sensor (ALS)

    The ambient light sensor, described in Table 7, is used by the system to detect the illumination of the surrounding environment and automatically adjust the screen brightness accordingly.

    Table 7: The ambient light sensor
    Sensor: ALS (TYPE_LIGHT)
    SensorEvent data (lx): values[0]
    Description: the illumination around the device

    Code Example 8 shows how to instantiate an ambient light sensor.

    …	
        private Sensor mALS;
        private SensorManager mSensorManager;
    
        …	
            mSensorManager = (SensorManager)context.getSystemService(Context.SENSOR_SERVICE);
            mALS = mSensorManager.getDefaultSensor(Sensor.TYPE_LIGHT);
        …

    Code Example 8. Instantiating an ambient light sensor**

    Barometer

    Applications can use the atmospheric pressure sensor (barometer), described in Table 8, to calculate the altitude of the device's current location.

    Table 8: The atmospheric pressure sensor
    Sensor: Barometer (TYPE_PRESSURE)
    SensorEvent data (mbar): values[0]
    Description: the ambient air pressure in mbar

    Code Example 9 shows how to instantiate a barometer.

    …	
        private Sensor mBarometer;
        private SensorManager mSensorManager;
    
        …	
            mSensorManager = (SensorManager)context.getSystemService(Context.SENSOR_SERVICE);
            mBarometer = mSensorManager.getDefaultSensor(Sensor.TYPE_PRESSURE);
        …

    Code Example 9. Instantiating a barometer**

    Sensor Performance and Optimization Guidelines

    To use sensors in your applications, you should follow these best practices:

    • Always check the availability of a specific sensor before using it
      The Android platform does not require a specific sensor to be included on or excluded from a device. Before using a sensor in your application, always check whether it is actually available.
    • Always unregister sensor listeners
      If the activity that implements the sensor listener is becoming invisible, or the dialog is stopping, unregister the sensor listener. This can be done in the activity's onPause() method or in the dialog's onStop() method. Otherwise, the sensor keeps acquiring data and, as a result, drains the battery.
    • Do not block the onSensorChanged() method
      The onSensorChanged() method is called frequently by the system to deliver sensor data. Put as little logic in this method as possible; complex computations on the sensor data should be moved out of it.
    • Always test sensor applications on real devices
      All sensors described in this section are hardware sensors. The Android emulator may not be able to simulate a particular sensor's functions and performance.

    GPS and Location


    GPS (Global Positioning System) is a satellite-based system that provides accurate geo-location information around the world. GPS is available on many Android phones and tablets. In many respects GPS behaves like a position sensor: it can provide accurate location data to applications running on the device. On the Android platform, GPS is not managed directly by the sensor framework. Instead, the Android location service accesses GPS data and passes it to an application through location listener callbacks.

    This section discusses GPS and location services only from a hardware sensor point of view. The complete location strategies offered by Android 4.2 phones and tablets based on Intel Atom processors are a much broader topic that is outside the scope of this section.

    Android Location Services

    Using GPS is not the only way to obtain location information on an Android device. The system can also use Wi-Fi*, cellular networks, or other wireless networks to obtain the device's current location. GPS and wireless networks (including Wi-Fi and cellular networks) act as "location providers" for the Android location services. Table 9 lists the main classes and interfaces used to access the Android location services.

    Table 9: The Android platform location service
    LocationManager (class): used to access the location services; provides various methods for requesting periodic location updates for an application or sending proximity alerts
    LocationProvider (abstract class): the abstract superclass of location providers
    Location (class): used by location providers to encapsulate geographic data
    LocationListener (interface): used to receive location notifications from the LocationManager

    Obtaining GPS Location Updates

    Similar to the mechanism of using the sensor framework to access sensor data, the application implements several callback methods defined in the LocationListener interface to receive GPS location updates. The LocationManager sends GPS update notifications to the application through these callbacks (the "don't call us, we'll call you" rule).

    To access GPS location data in the application, you need to request the fine location access permission in the Android manifest file (Code Example 10).

    <manifest …>
    …
        <uses-permission android:name="android.permission.ACCESS_FINE_LOCATION" />
    …  
    </manifest>

    Code Example 10. Requesting permission to access fine location data in the manifest file**

    Code Example 11 shows how to obtain GPS location updates and display the latitude and longitude coordinates in the dialog's text view.

    package com.intel.deviceinfo;
    
    import android.app.Dialog;
    import android.content.Context;
    import android.location.Location;
    import android.location.LocationListener;
    import android.location.LocationManager;
    import android.os.Bundle;
    import android.widget.TextView;
    
    public class GpsDialog extends Dialog implements LocationListener {
        TextView mDataTxt;
        private LocationManager mLocationManager;
    	
        public GpsDialog(Context context) {
            super(context);
            mLocationManager = (LocationManager)context.getSystemService(Context.LOCATION_SERVICE);
        }
    
        @Override
        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            mDataTxt = (TextView) findViewById(R.id.sensorDataTxt);
            mDataTxt.setText("...");
    		
            setTitle("Gps Data");
        }
    	
        @Override
        protected void onStart() {
            super.onStart();
            mLocationManager.requestLocationUpdates(
                LocationManager.GPS_PROVIDER, 0, 0, this);
        }
    		
        @Override
        protected void onStop() {
            super.onStop();
            mLocationManager.removeUpdates(this);
        }
    
        @Override
        public void onStatusChanged(String provider, int status, 
            Bundle extras) {
        }
    
        @Override
        public void onProviderEnabled(String provider) {
        }
    
        @Override
        public void onProviderDisabled(String provider) {
        }
    
        @Override
        public void onLocationChanged(Location location) {
            StringBuilder dataStrBuilder = new StringBuilder();
            dataStrBuilder.append(String.format("Latitude: %.3f,   Logitude%.3fn", location.getLatitude(), location.getLongitude()));
            mDataTxt.setText(dataStrBuilder.toString());
    		
        }
    }

    Code Example 11. A dialog that displays the GPS location data**

    GPS and Location Performance and Optimization Guidelines

    GPS provides the most accurate location information about the device. On the other hand, as a hardware feature it consumes extra power, and it takes time for GPS to get the first location fix. Here are some guidelines to follow when developing GPS- and location-aware applications:

    • Consider all available location providers
      In addition to GPS_PROVIDER, there is also NETWORK_PROVIDER. If your application only needs coarse location data, consider using NETWORK_PROVIDER.
    • Use cached locations
      It takes time for GPS to get the first location fix. While your application is waiting for GPS to deliver an accurate location update, you can first use the locations provided by the LocationManager's getLastKnownLocation() method to do part of the work.
    • Minimize the frequency and duration of location update requests
      Request location updates only when they are needed, and unregister from the location manager as soon as location updates are no longer required (see the sketch after this list).
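
    As a sketch of the last guideline (reusing the mLocationManager field and listener from Code Example 11; the thresholds are hypothetical), the minTime and minDistance parameters of requestLocationUpdates() can throttle updates instead of the 0, 0 values used earlier, and removeUpdates() should be called as soon as the updates are no longer needed.

    // request at most one update per minute, and only if the device moved at least 50 meters
    mLocationManager.requestLocationUpdates(
            LocationManager.GPS_PROVIDER,
            60000L,   // minTime in milliseconds
            50.0f,    // minDistance in meters
            this);

    // later, as soon as location updates are no longer needed:
    mLocationManager.removeUpdates(this);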

    Summary


    The Android platform provides APIs for developers to access a device's built-in sensors. These sensors can provide raw data about the device's current motion, position, and ambient environment conditions with high precision and accuracy. When developing sensor applications, follow the best practices to improve performance and power efficiency.

    About the Author

    Miao Wei is a software engineer in the Intel Software and Services Group. He is currently working on Intel® Atom™ processor scale-enabling projects.



    Copyright © 2013 Intel Corporation. All rights reserved.
    *Other names and brands may be claimed as the property of others.

    **This sample source code is released under the Intel Sample Source Code License Agreement.

  • Developers
  • Android*
  • Android*
  • Intel® Atom™ Processors
  • Sensors
  • Phone
  • Tablet
  • URL
  • Meshcentral.com - Intel(R) Galileo management site update


    A quick note to say that yesterday Matt Primrose updated the Intel Galileo sketch management web site. The site will now enumerate the sketches that are already loaded on the device even if they are not running. Basically, the old version would look at running processes only, while the new site looks at running processes and the list of files in the "/sketches" folder. This is a nice improvement since you can now upload a bunch of sketches and start and stop them at will. Big thanks to Matt for this update.

    If you have a Galileo board, I encourage you to take a look at our Galileo advanced usages document. The new web site was updated on Meshcentral.com/galileo, but also in the file package associated with our advanced usages document.

    Enjoy!
    Ylian
    meshcentral.com

  • Mesh
  • MeshCentral
  • MeshCentral.com
  • Galileo
  • Intel Galileo
  • sketch
  • Ylian
  • IoT
  • Internet-of-things
  • arduino

  • Debugging
  • Development Tools
  • Open Source
  • Sensors
  • HTML5
  • JavaScript*
  • Cloud Services
  • HTML5
  • Developers
  • Partners
  • Professors
  • Students
  • Yocto Project
  • Introducing 4th Generation Intel® Atom™ Processor, BayTrail, to Android Developers


    Abstract


    Intel has launched the 4th generation Intel Atom processor, code-named “BayTrail”. This latest Atom processor is a multi-core system-on-chip (SoC) that integrates the next-generation Intel® processor core, graphics, memory, and I/O interfaces into one solution. It is also Intel’s first SoC based on 22 nm process technology. This multi-core Atom processor provides outstanding computing power and is more power efficient than its predecessors. Besides the latest IA core technology, it also provides extensive platform features, such as graphics, connectivity, security, and sensors, which enable developers to create software with unlimited user experiences. This article focuses on BayTrail’s impact on Android, Intel’s enhancements to the Android architecture, and Intel’s solutions for Android developers.
     

    Table of Contents


    • BayTrail SoC CPU Benefits
    • BayTrail SoC Components Enhancements
    • BayTrail Improvement Over Previous Atom Processors
    • BayTrail Variants for Android – Z36XXX and Z37XXX
    • Intel Optimizations to the Android Software Stack
    • Intel Tools for Atom-Based Android Platforms
    • References

    BayTrail SoC CPU Benefits


    This section provides an overview of the BayTrail CPU capabilities. The new multi-core Intel® Atom™ SoC is powered by the Intel® Silvermont microarchitecture, which delivers faster performance with low power requirements.

          Faster Performance
    • Quad core supports 4 cores/4 threads of out-of-order processing and 2 MB of L2 cache, which makes the device run faster and more responsively by allowing multiple apps and services to run at the same time.
    • Burst technology 2.0 allows the system to tap extra cores when necessary, which allows CPU-intensive applications to run faster and more smoothly.
    • Performance is improved by using 22 nm process technology:
      • Maximizes current flow during ON state for better performance
      • Minimizes leaks during OFF state leading to more energy efficiency
    • 64-bit OS capable
          Efficient Power Management
    • Supports dynamic power sharing between the CPU and IP (e.g. graphics) allowing for higher peak frequencies
    • Total SoC energy budget is dynamically assigned according to the application needs
    • Supports fine-grained low power states which provides better power management and leads to longer battery life
    • Supports cache retention during deep sleep states leading to lower idle power and shorter wakeup times
    • Offers more than 10 hours of active battery life

    BayTrail CPU Specs in a Nutshell

    BayTrail SoC Components Enhancements


    In addition to the processor core, Intel has made many improvements to components on the SoC, such as graphics, imaging, audio, display, storage, USB, and security. These components enable developers to create innovative software on IA-based Android devices. The following highlights each component.

    • Display 
      • Supports high-resolution display (up to 2560x1600 @ 60 Hz)
      • Retinal display capable
      • Supports dual display
    • Intel® Wireless Display (WiDi)
      • Supports video up to 1080p/30 with 2 channel stereo
      • Content protection with HDCP2.1 (Widevine DRM)
      • Supports multitasking
      • Dual-screen apps are enabled
      • WFA Miracast certified
    • Graphics and Media Engine 
      • Based on Intel Gen7 HD graphic processor which provides amazing visuals
      • Supports graphics burst, OpenGL ES 3.0, and hardware video codec acceleration of multiple media formats
      • Supports extensive video and display post-processing
      • Stunning graphics with sharp, smooth HD video playback and Internet streaming, with 8–10 hours of battery life
    • Image Signal Processor
      • Supports ISP 2.0
      • Supports up to two cameras with 8 MP
      • Supports various imaging technologies, such as burst mode, continuous capture, low light noise reduction, video stabilization, 3A, and zero shutter lag.
    • USB
      • Supports USB 3.0
    • Audio
      • Low power audio engine
      • Supports multiple audio formats
    • Storage
      • Supports one SDIO 3.0 controller
      • Supports one eMMC 4.51 controller
      • Supports one SDXC controller
    • Security
      • Supports secure boot
      • Intel® Trusted Execution Engine (Intel® TXE)

    SoC Components Specs in a Nutshell

    BayTrail Improvement Over Previous Atom Processors


    Intel announced its first Atom processor for Android phones in 2012: the Z24XX, code-named "Medfield", a single-core processor based on Intel's 32 nm process technology. In the spring of 2013, Intel unveiled Medfield's successor for phones and tablets, the Z25XX series, code-named "CloverTrail+", a dual-core processor also built on 32 nm process technology. In the fall of 2013, Intel announced its latest Atom processor, the Z3XXX "BayTrail", which is available in both dual- and quad-core versions and is based on Intel's latest 22 nm process technology. Many improvements have been made to BayTrail. The following table summarizes BayTrail's improvements compared to its predecessors.

    BayTrail Enhancement from Previous Generation of SoC

    BayTrail Variants for Android – Z36XXX and Z37XXX


    The following table summarizes BayTrail variants for Android.

    BayTrail SoC Variants

    Intel Optimizations to the Android Software Stack


    Android is Google's open source Linux-based software stack developed for mobile phones and tablets. Google distributes the official code to the public through the Android Open Source Project (AOSP). OEMs who plan to release Android devices can work with Google and modify the distribution to fit their platform needs. The Android software stack consists of:

    • Linux kernel – contains device drivers and memory-, security-, and power-management-related software.
    • Middleware – contains native libraries required for application development, for example media, SQLite, OpenGL, SSL, Graphics, and WebKit.
    • Android runtime – contains the Java core libraries and the Dalvik virtual machine for running Java applications.
    • Android framework – contains Java classes or APIs to create Android applications and services.
    • Applications – contains Android applications.

    Android has evolved from its first named release, Cupcake, through the recent Jelly Bean (4.2) release, to the current release, KitKat (4.4). BayTrail supports both the Jelly Bean and KitKat distributions. Intel has introduced many optimizations to the Android software stack for performance enhancement. Developers can create apps with snappy performance and smooth, fluid user experiences.

          Optimizations include:
    • Improvements that ensure Dalvik apps run well on Intel processors
    • Tools for NDK developers to compile native code (C/C++) for x86
    • Optimizations to new web technologies such as HTML5 and JavaScript
    • Performance enhancements to the Dalvik VM
    • Optimizations to core libraries and the kernel, contributed back to AOSP
    • Device drivers that are validated and optimized for x86 power and memory footprint

    Intel’s Optimization to Android Software Stacks

    Intel Tools for Atom-Based Android Platforms


    Google provides a suite of tools for developers to build and debug software on Android platforms. Developers are required to install the Android SDK and integrate it with their choice of IDE to build the software. Emulator, debugger, code optimizer, performance optimizer, and test tools are also available from Google. 

    Developers can start developing Android software with the initial tools described in the following list.

    In addition to Google’s Android tools, Intel also provides tools specifically for helping developers speed up their development on Atom-based Android platforms.

    Intel Tools Features Summary

    References


    1. BayTrail Z36XXX and Z37XXX datasheet, http://www.intel.com/content/www/us/en/processors/atom/atom-z36xxx-z37xxx-datasheet-vol-1.html
    2. Intel® Atom™ Processor Z3000 Series for Android* Tablets Brief, http://www.intel.com/content/www/us/en/processors/atom/atom-z3000-android-tablets-brief.html?wapkw=android+atom+processor
    3. Intel IDF 2013 presentations:
      • Building Android* Systems with Intel® Architecture Based Platforms
      • Tablet Solutions in Business: Build on Intel® Technologies for Differentiation
      • Display Technologies for Intel® Graphics
      • Hands-on Lab: Develop, Optimize, Debug, and Tune Applications for Android*
      • Using the Second-Screen API and Intel® Wireless Display from Android* Applications
      • Accelerating Your Software Development for Android* on Intel® Platforms
      • Developing Native Applications on Android and Optimizing for Intel® Architecture
      • Technology Insight: Intel® Platform for Tablets, Code Name Bay Trail-T
      • Technology Insight: Intel Silvermont Microarchitecture
      • Tablets with Android* and Intel® Atom™ Processors

     


  • Product Documentation
  • Product Support
  • Technical Article
  • Development Tools
  • Education
  • Intel® Atom™ Processors
  • Mobility
  • Optimization
  • Security
  • Sensors
  • Android* Development Tools
  • Intel Hardware Accelerated Execution Manager (HAXM)
  • Intel® C++ Compiler
  • Intel® JTAG Debugger
  • Intel® Threading Building Blocks
  • Intel® Graphics Performance Analyzers
  • Android*
  • Phone
  • Tablet
  • Developers
  • Intel AppUp® Developers
  • Partners
  • Professors
  • Students
  • Android*

  • Implementing Multiple Touch Gestures Using Unity* 3D with TouchScript


    By Lynn Thompson

    Downloads

    Implementing Multiple Touch Gestures Using Unity* 3D with TouchScript [PDF 1.48MB]

    This article provides an overview and example for the several TouchScript gestures (Press, Release, Long Press, Tap, Flick, Pan, Rotate, and Scale) available when developing touch-based Unity* 3D simulations and applications running on Ultrabook™ devices with the Windows* 8 operating system. TouchScript is available at no cost from the Unity 3D Asset Store.

    The example used in this article starts with a preconfigured scene imported from Autodesk 3ds Max*. I then add geometry to the Unity 3D scene to construct graphical user interface (GUI) widgets that accept touch input from the user. The multiple gestures available via TouchScript will be implemented and customized such that adjustments to the widget can be made during runtime, allowing for a GUI widget that provides a touch UI acceptable to a wider audience when running a Unity 3D-based application on Windows 8.

    Creating the Example

    I first import into Unity 3D an Autodesk 3ds Max FBX* export that contains a few geometry primitives and a small patch of banyan and palm trees (see Figure 1). I add a first-person controller to the scene; then, I assign a box collider to the box primitive imported from Autodesk 3ds Max, which acts as the scene’s floor, to prevent the first-person controller from falling out of the scene.


    Figure 1. Unity* 3D editor with a scene imported from Autodesk 3ds Max*

    Next, I add eight spheres (LeftLittleTouch, LeftRingTouch, LeftMiddleTouch, LeftIndexTouch, RightLittleTouch, RightRingTouch, RightMiddleTouch, and RightIndexTouch) as children of the main camera, which is a child of the first-person controller. I give these spheres a transform scale of x = 0.15, y = 0.30, z = 0.15 and position them in front of the main camera in a manner similar to fingertips on a flat surface. I add a point light above the modified spheres and make it a child of the main camera to ensure illumination of the spheres. The layout of these modified spheres is shown in Figure 2.


    Figure 2. Unity* 3D runtime with a first-person controller and modified spheres as children for the touch interface

    This ends the base configuration of the example. From here, I add TouchScript gestures to the modified spheres and configure scripts to generate a desired touch response.

    Adding Press and Release Gestures

    The first-person controller from the initialization step of the example contains the JavaScript* file FPSInput Controller.js and the C# script Mouse Look.cs. The FPSInput Controller.js script takes input from the keyboard; Mouse Look.cs, obviously, takes input from the mouse. I modified these scripts to contain public variables that replace vertical and horizontal inputs into FPSInput Controller.js and to replace mouseX and mouseY inputs into the Mouse Look.cs script.

    This replacement is fairly straightforward in FPSInputController.js because the keyboard sending a 1, −1, or 0 to the script is replaced with a touch event that results in public variables being changed to a 1, −1, or 0. The touch objects, their respective scripts, and the values they send to script FPSInputController are provided in Table 1 and can be viewed in their entirety in the Unity 3D FirstPerson project accompanying this article.

    Table 1. Touch Objects and Corresponding Scripts in FPSInputController.js

    Object or Asset | Script | Public Variable Manipulation
    LeftLittleTouch | MoveLeft.cs | horizontal = -1 onPress, 0 onRelease
    LeftRingTouch | MoveForward.cs | vertical = 1 onPress, 0 onRelease
    LeftMiddleTouch | MoveRight.cs | horizontal = 1 onPress, 0 onRelease
    LeftIndexTouch | MoveReverse.cs | vertical = -1 onPress, 0 onRelease

    This method works for controller position because the information is discrete, as are the TouchScript onPress and onRelease functions. For rotation, an angle variable needs to be updated every frame. To accomplish this, I send a Boolean value to a Mouse Look.cs public variable, and the rotation angle is changed in the Mouse Look.cs Update function at a rate of 1 degree per frame accordingly. The touch objects, their respective scripts, and the values they send to the Mouse Look.cs script are provided in Table 2 and can be viewed in their entirety in the Unity 3D FirstPerson project accompanying this article.

    Table 2. Touch Objects and Corresponding Scripts in Mouse Look.cs

    Object or Asset | Script | Public Variable Manipulation
    RightLittleTouch | LookDown.cs | lookDown = true onPress, false onRelease
    RightRingTouch | LookRight.cs | lookRight = true onPress, false onRelease
    RightMiddleTouch | LookUp.cs | lookUp = true onPress, false onRelease
    RightIndexTouch | LookLeft.cs | lookLeft = true onPress, false onRelease

    These scripts allow touch interface for first-person shooter (FPS) position and rotation control, replacing keyboard and mouse input.
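
    To make the wiring concrete, below is a minimal sketch of what one of these move scripts might look like. It is not the script shipped with the accompanying project: the FPSInputSketch type is a stand-in for the modified FPSInputController, and the Pressed/Released event names assume a TouchScript version that exposes them (older releases hook the gestures up differently).

    using UnityEngine;
    using TouchScript.Gestures;

    // Stand-in for the modified FPSInputController with its public input variables.
    public class FPSInputSketch : MonoBehaviour
    {
        public float horizontal;
        public float vertical;
    }

    // Illustrative MoveLeft-style script: press strafes left, release stops.
    public class MoveLeftSketch : MonoBehaviour
    {
        public FPSInputSketch controller;   // assigned in the Inspector

        private void OnEnable()
        {
            GetComponent<PressGesture>().Pressed += OnPress;
            GetComponent<ReleaseGesture>().Released += OnRelease;
        }

        private void OnDisable()
        {
            GetComponent<PressGesture>().Pressed -= OnPress;
            GetComponent<ReleaseGesture>().Released -= OnRelease;
        }

        private void OnPress(object sender, System.EventArgs e)
        {
            controller.horizontal = -1.0f;   // -1 onPress, matching Table 1
        }

        private void OnRelease(object sender, System.EventArgs e)
        {
            controller.horizontal = 0.0f;    // 0 onRelease
        }
    }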

    Using the LongPress Gesture

    My original intent for this example was to have the LongPress Gesture make all the touch objects disappear after at least one object had been pressed for a certain amount of time. The touch objects would then all reappear after all touch objects had instigated a release gesture and had not been touched for a certain amount of time. When I tried implementing it this way, however, the behavior was not as I expected, possibly because the LongPress Gesture was used in conjunction with the standard Press and Release Gestures. As a workaround, I implemented this functionality by using the already-implemented Press and Release Gestures in combination with public variables and the delta time method in the system timer.

    When initially setting up the Unity 3D scene, I configured a TopLevelGameObject asset to hold the TouchScript Touch Manager and the TouchScript Windows 7 Touch Input script. To facilitate the desired LongPress Gesture, I added a custom C# script named PublicVariables.cs to the TopLevelGameObject asset. I did this not only to hold public variables but also to perform actions based on the state of these variables.

    To configure this disappear and reappear functionality, I configured each move and look script associated with its respective touch sphere to have access to the public variables in PublicVariables.cs. PublicVariables.cs contains a Boolean variable for the state of each modified sphere’s move or look Press Gesture, being true when the modified sphere is pressed and false when it is released.

    The PublicVariables.cs script uses the state of these variables to configure a single variable used to set the state of each modified sphere’s MeshRenderer. I configure the timer such that if any modified sphere or combination of modified spheres has been pressed for more than 10 seconds, the variable controlling the MeshRenderer state is set to False. If all of the spheres have been released for more than 2 seconds, the MeshRenderer state is set to True. Each move and look script has in its Update function a line of code to enable or disable its respective sphere’s MeshRenderer based on the state of this variable in PublicVariables.cs.

    This code results in all of the modified spheres disappearing when any sphere or combination of spheres has been pressed for more than 10 consecutive seconds. The modified spheres then all reappear if all modified spheres have been released for more than 2 seconds. By enabling and disabling the modified spheres’ MeshRenderer, only the modified sphere’s visibility is affected, and it remains an asset in the scene and is able to process touch gestures. As such, the modified spheres are still used to manipulate the scene’s first-person controller. The user is required to intuitively know where the spheres are positioned and be able to use them while they are not being rendered to the screen. Examine the PublicVariables, Move, and Look scripts in the example provided to see the code in its entirety.
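
    As a rough illustration of that timer logic, here is a minimal sketch of a PublicVariables-style script. The field names, thresholds, and the spheresVisible flag mirror the description above but are placeholders, not the exact code from the accompanying project.

    using UnityEngine;

    // Illustrative visibility timer: hides the touch spheres after 10 s of use,
    // shows them again after 2 s with nothing pressed.
    public class PublicVariablesSketch : MonoBehaviour
    {
        // One flag per modified sphere, set true onPress and false onRelease
        // by the move/look scripts.
        public bool leftLittle, leftRing, leftMiddle, leftIndex;
        public bool rightLittle, rightRing, rightMiddle, rightIndex;

        // Read by every move/look script to enable or disable its MeshRenderer.
        public bool spheresVisible = true;

        private float pressedTime;   // seconds any sphere has been held
        private float releasedTime;  // seconds all spheres have been released

        private void Update()
        {
            bool anyPressed = leftLittle || leftRing || leftMiddle || leftIndex ||
                              rightLittle || rightRing || rightMiddle || rightIndex;

            if (anyPressed)
            {
                pressedTime += Time.deltaTime;
                releasedTime = 0.0f;
                if (pressedTime > 10.0f) spheresVisible = false;  // hide after 10 s of use
            }
            else
            {
                releasedTime += Time.deltaTime;
                pressedTime = 0.0f;
                if (releasedTime > 2.0f) spheresVisible = true;   // show again after 2 s idle
            }
        }
    }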

    The Tap Gesture

    To demonstrate the use of multiple gestures with one asset, I add the Tap Gesture to all four move spheres. The Tap Gesture is configured in all four of the left GUI widget’s modified spheres’ respective move scripts. The move scripts are then configured for access to the first-person controller’s Character Motor script. I configure the tap functions in each move script to manipulate the maximum speed variables in the Character Motor’s movement function.

    The MoveForward script attached to the LeftRingTouch modified sphere is configured so that a Tap Gesture increases the maximum forward speed and maximum reverse speed by one. I configure the MoveReverse script attached to the LeftIndexTouch modified sphere for a Tap Gesture to decrease the maximum forward speed and maximum reverse speed by one. I configure the MoveLeft script attached to the LeftLittleTouch modified sphere for a Tap Gesture to increase the maximum sideways speed by one and the MoveRight script attached to the LeftMiddleTouch modified sphere for a Tap gesture to decrease the maximum sideways speed by one. The maximum speed variables are floating-point values and can be adjusted as desired.
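
    The following sketch shows the shape of such a tap handler. SpeedSettingsSketch is a hypothetical wrapper for the Character Motor's maximum speed variables (the real project writes to the Character Motor's movement fields directly), and the Tapped event name assumes a TouchScript version that exposes it.

    using UnityEngine;
    using TouchScript.Gestures;

    // Hypothetical wrapper for the Character Motor's maximum speeds.
    public class SpeedSettingsSketch : MonoBehaviour
    {
        public float maxForwardSpeed = 6.0f;
        public float maxReverseSpeed = 4.0f;
    }

    // Illustrative tap handler for the LeftRingTouch sphere.
    public class MoveForwardTapSketch : MonoBehaviour
    {
        public SpeedSettingsSketch speeds;   // assigned in the Inspector

        private void OnEnable()
        {
            GetComponent<TapGesture>().Tapped += OnTap;
        }

        private void OnDisable()
        {
            GetComponent<TapGesture>().Tapped -= OnTap;
        }

        private void OnTap(object sender, System.EventArgs e)
        {
            // A sharp tap raises both maximum speeds by one, as described above.
            speeds.maxForwardSpeed += 1.0f;
            speeds.maxReverseSpeed += 1.0f;
        }
    }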

    When using the default settings with the Tap Gesture, the speeds change during the period when the user may want to press the modified sphere to instigate movement. In short, Press and Release Gestures are also considered Tap Gestures. To mitigate this behavior, I changed the Time Limit setting in the Will Recognize section of the Tap Gesture (see Figure 3) from Infinity to 0.25. The lower this setting, the sharper the tap action must be to instigate the Tap Gesture.


    Figure 3. Unity* 3D editor showing a modified Time Limit setting in a Tap Gesture

    The modified sphere can be used to navigate the scene and adjust the speed at which the scene is navigated. A quirk of this method for navigating and adjusting speed is that when a Tap Gesture is used to adjust speed, the first-person controller is also moved in the direction associated with the modified sphere that was tapped. For example, tapping the LeftIndexTouch modified sphere to decrement the maximum forward speed and maximum reverse speed slightly moves the first-person controller, and subsequently the scene’s main camera, in reverse. In the accompanying Unity 3D project, I add GUI labels to display the maximum speed setting so that the labels can be visualized when tapping the modified spheres. You can remove this quirk by adding a GUI widget component that, when used, disables the Press and Release Gestures, allowing the user to tap the GUI widget component without moving the main scene’s camera. After the maximum forward speed and maximum reverse speed are set to the user’s preference, the new GUI widget component can be used again to enable the Press and Release Gestures.

    When developing this portion of the example, I intended to add a Flick Gesture in combination with the Tap Gesture. The Tap Gesture was going to increase speed, and the Flick Gesture was intended to decrease speed. However, when adding both the Flick and the Tap Gestures, only the Tap Gesture was recognized. Both worked independently with the Press and Release Gestures, but the Flick Gesture was never recognized when used in conjunction with the Tap Gesture.

    The Flick Gesture

    To demonstrate the Flick Gesture, I add functionality to the modified spheres on the right side of the screen. The look scripts are attached to these spheres and control the rotation of the scene's main camera, which is a child of the first-person controller. I begin by adding a Flick Gesture to each sphere. I configure the Flick Gestures added to the RightIndexTouch and RightRingTouch modified spheres that control horizontal rotation with their touch direction set to horizontal (see Figure 4). I configure the Flick Gestures added to the RightMiddleTouch and RightLittleTouch modified spheres that control vertical rotation with their touch direction set to vertical. Constraining the direction this way may be useful when the modified spheres have disappeared after being pressed for 10 or more seconds and the touch interface does not respond to the user's flick (as opposed to responding in an undesired manner). The user then knows that the touch interface's modified spheres need to be released, allows 2 seconds for the modified spheres to reappear, and then reengages the touch GUI widget.


    Figure 4. Unity* 3D editor showing a modified Direction setting in a Flick Gesture

    Each look script uses the public variables that exist in the Mouse Look script. When a modified sphere is flicked, the Mouse Look script instigates a rotation in the respective direction, but because there is no flick Release Gesture, the rotation continues indefinitely. To stop the rotation, the user must sharply press and release the modified sphere that was flicked. This action causes an additional degree of rotation from the Press Gesture but is followed by the Release Gesture, which sets the respective rotation public variable to False, stopping the rotation.

    Like the Tap Gesture, the Flick Gesture now works in conjunction with the Press and Release Gestures. Users can still rotate the scene’s main camera by holding down the appropriate modified sphere, releasing it to stop the rotation. With the Flick Gesture implemented, users can also flick the desired modified sphere to instigate a continuous rotation that they can stop by pressing and releasing the same modified sphere.
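
    A minimal sketch of one of these flick handlers follows. The lookLeft/lookRight flags stand in for the public variables in the Mouse Look script, and the Flicked event and ScreenFlickVector property assume a TouchScript version that exposes them.

    using UnityEngine;
    using TouchScript.Gestures;

    // Illustrative flick handler for a horizontal-rotation sphere.
    public class LookFlickSketch : MonoBehaviour
    {
        public bool lookRight;   // read each frame by the rotation code
        public bool lookLeft;

        private void OnEnable()
        {
            GetComponent<FlickGesture>().Flicked += OnFlick;
        }

        private void OnDisable()
        {
            GetComponent<FlickGesture>().Flicked -= OnFlick;
        }

        private void OnFlick(object sender, System.EventArgs e)
        {
            // Start a continuous rotation in the flick direction; a later
            // press/release on the same sphere sets both flags back to false.
            var flick = (FlickGesture)sender;
            lookRight = flick.ScreenFlickVector.x > 0.0f;
            lookLeft = !lookRight;
        }
    }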

    The Remaining Gestures

    To this point in the example, all of the gestures implemented enhance the user’s ability to directly navigate the scene. I use the remaining gestures (Rotate, Scale, and Pan) to allow the user to modify the touch targets’ (the modified spheres) layout for improved ergonomics.

    Also, up to this point, all of the gestures are discrete in nature. An immediate action occurs when a Unity 3D asset is tapped, pressed, released, or flicked. This action may be the setting of a variable that results in a continuous action (the flick-instigated rotation), but the actions are discrete in nature. The Rotate, Scale, and Pan Gestures are continuous in nature. These gestures implement a delta method where the difference between the current state of the gesture and that of the previous frame is used in the script to manipulate a Unity 3D screen asset as desired.

    The Rotate Gesture

    I add the Rotate Gesture in the same way as previous gestures. I use the Add Component menu in the Inspector Panel to add the TouchScript gesture, and the script attached to the touch asset receiving the gesture is modified to react to the gesture. When implemented, the Rotate Gesture is instigated by a movement similar to using two fingers to rotate a coin on a flat surface. This action must occur within an area circumscribed by the Unity 3D asset receiving the gesture.

    In this example, rotating the modified spheres results in the capsule shape becoming more of a sphere as the end of the modified sphere is brought into view. This behavior gives the user an alternate touch target interface, if desired. In this example, this functionality is of more use for the modified spheres on the right side of the screen. For the rotate widget on the right side of the screen, the user can flick the appropriate modified sphere for constant rotation up, down, left, or right. I configure the modified spheres controlling vertical rotation with vertical flicks. I configure the modified spheres controlling horizontal rotation with horizontal flicks. The modified spheres controlling horizontal rotation can now be rotated so that the longest dimension is horizontal, allowing for a more intuitive flicking action.

    When rotating the modified spheres that are closest to the center of the screen, the modified spheres take on a more spherical appearance. The farther a modified sphere is from the center of the screen when it is rotated, the more capsule-like its appearance remains. This is an effect of the modified sphere's distance from the scene's main camera. It may be possible to mitigate this effect by adjusting the axes on which the modified sphere rotates. The following line of code does the work of rotating the modified sphere when the Rotate Gesture is active:

    targetRot = Quaternion.AngleAxis(gesture.LocalDeltaRotation, gesture.WorldTransformPlane.normal) * targetRot;

    The second argument in the Quaternion.AngleAxis is the axis on which the modified sphere rotates. This argument is a Vector3 and can be changed as follows:

    targetRot = Quaternion.AngleAxis(gesture.LocalDeltaRotation, new Vector3(1, 0, 0)) * targetRot;

    By adjusting this Vector3 as a function of the modified sphere’s distance from the position relative to the scene’s main camera, I can remove the effect, resulting in the modified sphere’s appearance being more consistent and spherical across all the spheres.
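
    For example, one possible (untested) adjustment along those lines replaces the world transform plane normal with an axis derived from the sphere's position relative to the main camera:

    // Sketch: use the camera-to-sphere direction as the rotation axis instead of
    // gesture.WorldTransformPlane.normal (an assumption, not the project's code).
    Vector3 axis = (transform.position - Camera.main.transform.position).normalized;
    targetRot = Quaternion.AngleAxis(gesture.LocalDeltaRotation, axis) * targetRot;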

    The Scale Gesture

    I add the Scale Gesture as an additional means of altering the modified sphere’s presentation. When rotated, the resulting circular touch target may not be large enough for the user’s preference. The user can employ the Scale Gesture to modify the size of the touch target.

    The motion used to instigate a Scale Gesture is similar to the pinch gesture used on mobile devices. Two fingers starting apart and brought together instigate a scale-down gesture; two fingers starting together and moved apart instigate a scale-up gesture. The code in the accompanying Unity 3D project scales the target asset uniformly. This is not required: you can code for scaling on any combination of the x, y, or z axes.

    An additional feature that may help with user utilization of the GUI widgets is automatic scaling following the 10 seconds of constant use that results in the disappearance of the GUI widgets. By automatically multiplying a modified sphere's transform.localScale by 1.1 whenever the modified sphere's MeshRenderer has been disabled, the user automatically gets a larger touch target, which may reduce the user's need to intermittently release the GUI widgets to confirm the modified sphere's location on the touch screen.
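
    A hedged sketch of that auto-scaling check, placed for example in each move or look script's Update function (publicVariables and hasGrown are names assumed for this sketch):

    // Grow the touch target once while the spheres are hidden, shrink it back when shown.
    if (!publicVariables.spheresVisible && !hasGrown)
    {
        transform.localScale *= 1.1f;   // larger touch target while invisible
        hasGrown = true;
    }
    else if (publicVariables.spheresVisible && hasGrown)
    {
        transform.localScale /= 1.1f;   // restore the original size
        hasGrown = false;
    }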

    The Pan Gesture

    For the purposes of ergonomics, the Pan Gesture is probably the most useful gesture. It allows users to touch the objects to be manipulated and drag them anywhere on the screen. As the modified spheres are initially positioned, users may, depending on the Ultrabook device they are using, have wrists or forearms resting on the keyboard. With the Pan Gesture functionality implemented, users can drag the modified spheres to the sides of the screen, where there may be less chance of inadvertently touching the keyboard. For additional ergonomic optimization, users can touch all four modified spheres that affect the first-person controller and drag them at the same time to a place on the screen that allows them to rest their wrists and arms as desired.

    The following two lines of code, taken from a Unity 3D example, do the work of moving the Unity 3D scene asset when the Pan Gesture is active:

    var local = new Vector3(transform.InverseTransformDirection(target.WorldDeltaPosition).x, transform.InverseTransformDirection(target.WorldDeltaPosition).y, 0);
    targetPan += transform.parent.InverseTransformDirection(transform.TransformDirection(local));

    Note that in the above code, the z component of the Vector3 is zero and that in the accompanying example, when the modified spheres are moved, or panned, they move only in the x–y plane. By modifying this Vector3, you can customize the interface a great deal. The first example that comes to mind is having a Pan Gesture result in a much faster Unity 3D asset motion on one axis than another.
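
    For instance, a hedged variation of the code above that pans twice as fast on the x-axis as on the y-axis (and still locks the z-axis) might look like this:

    var delta = transform.InverseTransformDirection(target.WorldDeltaPosition);
    var local = new Vector3(2.0f * delta.x, delta.y, 0);   // x moves twice as fast; z stays locked
    targetPan += transform.parent.InverseTransformDirection(transform.TransformDirection(local));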

    In the “Everything” example provided with TouchScript, the following line of code limits the panning of the manipulated asset on the y-axis:

    if(transform.InverseTransformDirection(transform.parent.TransformDirection(targetPan - startPan)).y < 0) targetPan = startPan;

    This line was commented out in the accompanying example but can easily be modified and implemented if you want to limit how far a user can move a GUI widget component from its original position.

    Video 1: Touch Script Multi Gesture Example

    Resolving Issues During Development of the Example

    One issue I found during development was that the Rotate Gesture never seemed to be recognized when the Press, Release, and Tap Gestures were added. To work around this issue, I added a modified sphere to the GUI widget on the left side of the screen intended for use by the left thumb. I configured this modified sphere with a script (ToggleReorganize.cs) so that when a user taps the modified sphere, a Boolean variable is toggled in the PublicVariables script. All of the modified sphere’s scripts reference this variable and disable their Press, Release, Tap, or Flick Gesture when the toggle variable is True, resulting in a UI that requires the user to tap the left thumb button to modify the widget. The user must then tap this left thumb button again when finished modifying the widget to go back to navigating the scene.

    During the process of implementing this functionality, I discovered that the right widget did not require this functionality for the widget to be modified. The user could rotate, pan, and scale the widget without tapping the left thumb modified sphere. I implemented the functionality anyway, forcing the user to tap the left thumb modified sphere in the left widget to alter the ergonomics of the right widget. I did this because the right widget became awkward to use when it was modified at the same time it was being used to navigate the scene.

    Looking Ahead

    In addition to the Unity 3D scene navigation control, users can customize the GUI widgets. They can rotate, scale, and move (pan) the components of the widget to suit their ergonomic needs. This functionality is valuable when developing applications that support multiple platforms, such as Ultrabook devices, touch laptops, and tablets. These platforms can be used in any number of environments, with users in a variety of physical positions. The more flexibility the user has to adjust GUI widget configuration in these environments, the more pleasant the user’s experience will be.

    The GUI widgets used in the accompanying example can and should be expanded to use additional GUI widget components designed for thumb use that can control assets in the game or simulation or control assets that are components of the GUI widgets. This expansion may include items in the simulation, such as weapons selection, weapons firing, camera zoom, light color, and jumping. To alter the GUI widget components, these thumb buttons can change the modified spheres to cubes or custom geometry. They can also be used to change the opacity of a material or color that GUI widget components use.

    Conclusion

    This article and the accompanying example show that using TouchScript with Unity 3D is a valid means of implementing a user-configurable GUI widget on Ultrabook devices running Windows 8. The GUI widgets implemented in the example provide a touch interface for the Unity 3D first-person controller. This interface can similarly be connected to the Unity 3D third-person controller or custom controllers simulating an overhead, driving, or flying environment.

    When developing Unity 3D GUI widgets for Ultrabook devices running Windows 8, the desired result is for users not to revert to the keyboard and mouse. All of the functionality that is typically associated with a legacy UI (a keyboard-and-mouse first-person controller) should be implemented in a production-grade touch interface. By taking this into consideration when implementing the TouchScript gestures described in this article and the accompanying example, you can greatly increase your prospects for obtaining a positive user response.


    Note: The example provided with this article uses and references the examples provided with TouchScript as downloaded at no cost from the Unity 3D Asset Store.


    About the Author

    Lynn Thompson is an IT professional with more than 20 years of experience in business and industrial computing environments. His earliest experience is using CAD to modify and create control system drawings during a control system upgrade at a power utility. During this time, Lynn received his B.S. degree in Electrical Engineering from the University of Nebraska, Lincoln. He went on to work as a systems administrator at an IT integrator during the dot com boom. This work focused primarily on operating system, database, and application administration on a wide variety of platforms. After the dot com bust, he worked on a range of projects as an IT consultant for companies in the garment, oil and gas, and defense industries. Now, Lynn has come full circle and works as an engineer at a power utility. He has since earned a Masters of Engineering degree with a concentration in Engineering Management, also from the University of Nebraska, Lincoln.

    Intel, the Intel logo, and Ultrabook are trademarks of Intel Corporation in the U.S. and/or other countries.
    Copyright © 2013 Intel Corporation. All rights reserved.
    *Other names and brands may be claimed as the property of others.

  • ULTRABOOK™
  • applications
  • Gesture Recognition
  • Unity 3D*
  • Microsoft Windows* 8
  • Windows*
  • Graphics
  • Microsoft Windows* 8 Desktop
  • Sensors
  • User Experience and Design
  • Laptop
  • Tablet
  • URL
  • CERTIFACE E INTEL, juntos no combate às Fraudes


    Certiface is an application for certifying people over the web. It uses state-of-the-art facial biometrics to prevent duplicate faces and to combat fraud. The system runs in the cloud, the client's identity is preserved, and no physical contact is required.

    In a world where document fraud, card cloning, and forgeries of every kind make newspaper and television headlines, an innovative solution has arrived whose main value is protecting the identity of honest people: Certiface. It is innovative in several ways, bringing technology that exceeded expectations for the accuracy of identifying people by their faces while fully preserving the integrity and privacy of citizens.

    Certiface is a solution that uses state-of-the-art facial biometrics to certify people; its main objective is to prevent duplicate faces and to combat fraud in the consumer market.

    As a fraud-prevention solution for credit granting, an activity typical of companies that serve end consumers and considered mission critical both in the financial market and in retail, it must offer high availability and low response times while also being fast and inexpensive to deploy.

    To meet these requirements, the solution runs in the cloud, and its high performance with millions of users comes from libraries that make the best use of the Intel processors in its servers, such as the Intel TBB, IPP, and MKL libraries, which are used to compute the biometric code that is later stored in the system's central database. This performance is due primarily to the fast matrix operations provided by the MKL library.

    Integration with the IPP library gave Certiface a 10x performance improvement and, combined with the TBB library, Certiface now uses all of these resources in parallel, making it possible to process facial biometrics simultaneously on every core available in the system. This significantly increases processing speed, in proportion to the number of processor cores, exploiting the full computational power of Intel-based hardware.

    Intel architecture is present at every stage of the solution, from the server processors that support the cloud operation to the mobile devices with the Android operating system and Intel processors through which users interact with the Certiface technology. This combination of technologies makes it possible to fight fraud in the field with the full power of mobile processing. Thanks to the evolution of these platforms, face detection in the image, together with other computer vision routines, makes the processing distributed and effective without heavy bandwidth consumption.

    Because the technology is minimally intrusive, no physical contact with equipment is required, which makes the solution very simple to deploy, use, and operate and therefore ideal for adoption by, for example, large consumer credit companies.

    Intel server – biometric computation uses the TBB, IPP, and MKL libraries.

    Follow the links below to learn more:

    * Text written jointly by Plauto Diniz and Alessandro de Oliveira Faria (Cabelo).

     

     

  • biometria facial
  • Developers
  • Partners
  • Android*
  • Apple iOS*
  • Apple Mac OS X*
  • Linux*
  • Microsoft Windows* (XP, Vista, 7)
  • Microsoft Windows* 8
  • Android*
  • Business Client
  • Server
  • Advanced
  • Intel® Integrated Performance Primitives
  • Cloud Computing
  • Development Tools
  • Mobility
  • Sensors
  • Small Business
  • User Experience and Design
  • Laptop
  • Phone
  • Server
  • Tablet
  • Desktop
  • URL
  • Gameplay: Touch controls for your favorite games


    Download Article

    Download Gameplay: Touch controls for your favorite games [PDF 703KB]

    GestureWorks Gameplay is a revolutionary new way of interacting with popular PC games. Gameplay software for Windows 8 lets gamers use and build their own Virtual Controllers for touch, which are overlaid on top of existing PC games. Each Virtual Controller overlay adds buttons, gestures, and other controls that are mapped to input the game already understands. In addition, gamers can use hundreds of personalized gestures to interact on the screen. Ideum’s collaboration with Intel gave them access to technology and engineering resources to make the touch overlay in Gameplay possible.

    Check out this one-minute video that explains the Gameplay concept.

    It’s all about the virtual controllers

    Unlike traditional game controllers, virtual controllers can be fully customized, and gamers can even share them with their friends. Gameplay works on Windows 8 tablets, Ultrabooks, 2-in-1 laptops, All-In-Ones, and even multitouch tables and large touch screens.


    Figure 1 - Gameplay in action on Intel Atom-based tablet

    "The Virtual Controller is real! Gameplay extends hundreds of PC games that are not touch-enabled and it makes it possible to play them on a whole new generation of portable devices, " says Jim Spadaccini, CEO of Ideum, makers of GestureWorks Gameplay. "Better than a physical controller, Gameplay’s Virtual Controllers are customizable and editable. We can’t wait to see what gamers make with Gameplay."


    Figure 2 - The Home Screen in Gameplay

    Several dozen pre-built virtual controllers for popular Windows games come with GestureWorks Gameplay (currently there are over 116 unique titles). Gameplay lets users configure, layout, and customize existing controllers as well. The software also includes an easy to use, drag-and-drop authoring tool allowing users to build their own virtual controller for many popular Windows-based games distributed on the Steam service.


    Figure 3 - Virtual Controller layout view

    Users can place joysticks, D-pads, switches, scroll wheels, and buttons anywhere on the screen, change the size, opacity, and add colors and labels. Users can also create multiple layout views which can be switched in game at any time. This allows a user to create unique views for different activities in game, such as combat versus inventory management functions in a Role Playing Game.


    Figure 4 - Virtual Controller Global Gestures View

    Powered by the GestureWorks gesture-processing engine (GestureWorks Core), Gameplay provides support for over 200 global gestures. Basic global gestures such as tap, drag, pinch/zoom, and rotate are supported by default but are also customizable. This allows extension of the overlaid touch controllers, giving gamers access to multi-touch gestures that can provide additional controls to PC games. For example, certain combat moves can be activated with a simple gesture instead of a button press in an FPS. Gameplay even includes experimental support for accelerometers, so you can steer in a racing game by tilting your Ultrabook™ or tablet, and it detects when you change your 2-in-1 device to tablet mode to optionally turn on the virtual controller overlay.

    Challenges Addressed During Development

    Developing all this coolness was not easy. To make the vision for Gameplay a reality, several technical challenges had to be overcome. Some were solved using traditional programming methods, while others required more innovative solutions.

    DLL injection

    DLL injection is a method used for executing code within the address space of another process by getting it to load an external dynamically-linked library. While DLL injection is often used by external programs for nefarious reasons, there are many legitimate uses for it, including extending the behavior of a program in a way its authors did not anticipate or originally intend. With Gameplay, we needed a method to insert data into the input thread of the process (game) being played so the touch input could be translated to inputs the game understood. Of the myriad methods for implementing DLL injection, Ideum chose to use the Windows hooking calls in the SetWindowsHookEx API. Ultimately, Ideum opted to use process-specific hooking versus global hooking for performance reasons.
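
    For illustration only, the sketch below shows the Win32 calls involved in a thread-specific hook via P/Invoke. It is not Gameplay's implementation: a hook procedure that runs inside another process must live in a DLL mapped into that process, so this managed snippet only demonstrates the call shape (WH_GETMESSAGE, the hook procedure, and passing messages along the chain).

    using System;
    using System.Runtime.InteropServices;

    // Minimal sketch of thread-specific hooking with SetWindowsHookEx.
    static class HookSketch
    {
        private const int WH_GETMESSAGE = 3;

        private delegate IntPtr HookProc(int nCode, IntPtr wParam, IntPtr lParam);

        [DllImport("user32.dll", SetLastError = true)]
        private static extern IntPtr SetWindowsHookEx(int idHook, HookProc lpfn,
                                                      IntPtr hMod, uint dwThreadId);

        [DllImport("user32.dll")]
        private static extern IntPtr CallNextHookEx(IntPtr hhk, int nCode,
                                                    IntPtr wParam, IntPtr lParam);

        [DllImport("user32.dll")]
        private static extern bool UnhookWindowsHookEx(IntPtr hhk);

        private static IntPtr hook = IntPtr.Zero;
        private static readonly HookProc proc = Forward;   // keep a reference so the GC does not collect it

        // Hook only the input thread of the target process (thread-specific, not global).
        public static void Attach(uint targetThreadId)
        {
            hook = SetWindowsHookEx(WH_GETMESSAGE, proc, IntPtr.Zero, targetThreadId);
        }

        public static void Detach()
        {
            if (hook != IntPtr.Zero) UnhookWindowsHookEx(hook);
        }

        private static IntPtr Forward(int nCode, IntPtr wParam, IntPtr lParam)
        {
            // Translated touch input would be injected here before passing the message on.
            return CallNextHookEx(hook, nCode, wParam, lParam);
        }
    }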

    Launching games from a third-party launcher

    Two methods of hooking into a target process's address space were explored. The application can hook into a running process's address space, or the application can launch the target executable as a child process. Both methods are sound; however, in practice it is much easier to monitor and intercept processes or threads created by the target process when the application is a parent of the target process.

    This poses a problem for application clients, such as Steam and UPlay, that are launched when a user logs in. Windows provides no guaranteed ordering for startup processes, and the Gameplay process must launch before these processes to properly hook in the overlay controls. Gameplay solves this issue by installing a lightweight system service that monitors for startup applications when a user logs in. When one of the client applications of interest starts, Gameplay is then able to hook in as a parent of the process, ensuring the overlay controls are displayed as intended.

    Lessons Learned

    Mouse filtering

    During development, several game titles were discovered that incorrectly processed virtual mouse input received from the touch screen. This problem largely manifested with First Person Shooter titles or Role Playing Titles that have a "mouse-look" feature. The issue was that the mouse input received from the touch panel was absolute with respect to a point on the display, and thus in the game environment. This made the touch screen almost useless as a "mouse-look" device. The eventual fix was to filter out the mouse inputs by intercepting the input thread for the game. This allowed Gameplay to emulate mouse input via an on-screen control such as a joystick for the "mouse-look" function. It took a while to tune the joystick responsiveness and dead zone to feel like a mouse, but once that was done, everything worked beautifully. You can see this fix in action on games like Fallout: New Vegas or The Elder Scrolls: Skyrim.

    Vetting titles for touch gaming

    Ideum spent significant amounts of time tuning the virtual controllers for optimal gameplay. There are several elements of a game that determine its suitability for using with Gameplay. Below are some general guidelines that were developed for what types of games work well with Gameplay:

    Gameplay playability by game type

    Good / Better / Best

    Role Playing Games (RPG)

    Simulation

    Fighting

    Sports

    Racing

    Puzzles

    Real Time Strategy (RTS)

    Third Person Shooters

    Platformers

    Side Scrollers

    Action and Adventure

    While playability is certainly an important aspect of vetting a title for use with Gameplay, the most important criterion is stability. Some titles will just not work with the hooking technique, input injection, or overlay technology. This can happen for a variety of reasons, but most commonly it is because the game title itself monitors its own memory space or input thread to check for tampering. While Gameplay itself is a completely legitimate application, it employs techniques that can also be used for the forces of evil, so unfortunately some titles that are sensitive to these techniques will never work unless touch is enabled natively.

    User Response

    While still early in its release, Gameplay 1.0 has generated some interesting user feedback regarding touch gaming on a PC. There are already some clear trends in the feedback being received. At a high level, it is clear that everyone universally loves being able to customize the touch interface for games. The remaining feedback focuses on personalizing the gaming experience in a few key areas:

    • Many virtual controllers are not ideal for left-handed people; this was an early change to many of the published virtual controllers.
    • Button size and position is the most common change, so much so that Ideum is considering adding an automatic hand-sizing calibration in a future Gameplay release.
    • Many users prefer rolling touch inputs vs. discrete touch-and-release interaction.

    We expect many more insights to reveal themselves as the number of user created virtual controllers increases.

    Conclusion

    GestureWorks Gameplay brings touch controls to your favorite games. It does this via a visual overlay and supports additional interactions such as gestures, accelerometers, and 2-in-1 transitions. What has been most interesting in working on this project has been the user response. People are genuinely excited about touch gaming on PCs and ecstatic that they can now play many of the titles they previously enjoyed with touch.

    About Erik

    Erik Niemeyer is a Software Engineer in the Software & Solutions Group at Intel Corporation. Erik has been working on performance optimization of applications running on Intel microprocessors for nearly fifteen years. Erik specializes in new UI development and micro-architectural tuning. When Erik is not working he can probably be found on top of a mountain somewhere. Erik can be reached at erik.a.niemeyer@intel.com.

    About Chris

    Chris Kirkpatrick is a software applications engineer working in the Intel Software and Services Group supporting Intel graphics solutions on mobile platforms in the Visual & Interactive Computing Engineering team. He holds a B.Sc. in Computer Science from Oregon State University. Chris can be reached at chris.kirkpatrick@intel.com.

    Resources

    https://gameplay.gestureworks.com/

    http://software.intel.com/en-us/articles/detecting-slateclamshell-mode-screen-orientation-in-convertible-pc

     

    Intel, the Intel logo, and Ultrabook are trademarks of Intel Corporation in the U.S. and/or other countries.

    Copyright © 2014 Intel Corporation. All rights reserved.

    *Other names and brands may be claimed as the property of others.

  • ideum
  • GestureWorks; Ultrabook
  • virtual controller
  • Developers
  • Microsoft Windows* 8
  • Windows*
  • Beginner
  • Game Development
  • Sensors
  • Touch Interfaces
  • User Experience and Design
  • Laptop
  • Tablet
  • URL
  • PERCEPTUAL COMPUTING: Augmenting the FPS Experience


    Downloads

    PERCEPTUAL COMPUTING: Augmenting the FPS Experience [PDF 977KB]


    1. Introduction

    For more than a decade, we've enjoyed the evolution of the First Person Shooter (FPS) Genre, looking at games through the eyes of the protagonist and experiencing that world first hand. To exercise our control, we've been forced to communicate with our avatar through keyboard, mouse, and controllers to make a connection with that world. Thanks to Perceptual Computing, we now have additional modes of communication that bring interaction with that world much closer. This article not only covers the theory of perceptual controls in FPS games, but demonstrates actual code that allows the player to peek around corners by leaning left or right. We will also look at using voice control to select options in the game and even converse with in-game characters.

    A familiarity with the Intel® Perceptual Computing SDK is recommended but not essential, and although the code is written in Dark Basic Professional (DBP), the principles are also suited to C++, C#, and Unity*. The majority of this article covers the theory and practice of augmenting the first-person experience and is applicable not only to games but to simulations, tours, and training software.

    In this article, we’ll be looking at augmenting the FPS game genre, a popular mainstay of modern gaming and one that has little to no Perceptual Computing traction. This situation is partly due to the rigid interface expectations required from such games and partly to the relative newness of Perceptual Computing as an input medium.

    As you read this article, you will be able to see that with a little work, any FPS can be transformed into something so much more. In a simple firefight or a horror-thriller, you don't want to be looking down at your keyboard to find the correct key; you want to stay immersed in the action. Figuring out the combination of keys to activate shields, recharge health, duck behind a sandbag, and reload within a heartbeat is the domain of the veteran FPS player, but these days games belong to the whole world, not just the elite. Only Perceptual Computing has the power to provide this level of control without requiring extensive practice or lightning-fast hand/eye coordination.


    Figure 1. When reaction times are a factor, looking down at the keyboard is not an option

    We’ve had microphones for years, but it has only been recently that voice recognition has reached a point where arbitrary conversations can be created between the player and the computer. It’s not perfect, but it’s sufficiently accurate to begin a realistic conversation within the context of the game world.


    Figure 2. Wouldn’t it be great if you could just talk to characters with your own voice?

    You’ve probably seen a few games now that use non-linear conversation engines to create a sense of dialog using multiple choices, or a weapon that has three or four modes of fire. Both these features can be augmented with voice control to create a much deeper sense of immersion and a more humanistic interface with the game.

    This article will look at detecting what the human player is doing and saying while playing a First Person experience, and converting that into something that makes sense in the gaming world.

    2. Why Is This Important

    As one of the youngest and now one of the largest media industries on the planet, the potential for advancement in game technology is incredible, and bridging the gap between user and computer is one of the most exciting. One step in this direction is a more believable immersive experience, and one that relies on our natural modes of interaction, instead of the artificial ones created for us.

    With a camera that can sense what we are doing and a microphone that can pick up what we say, you have almost all the ingredients to bridge this gap entirely. It only remains for developers to take up the baton and see how far they can go.

    For developers who want to push the envelope and innovate around emerging technologies, this subject is vitally important to the future of the First Person experience. There is only so much a physical controller can do, and for as long as we depend on it for all our game controls we will be confined to its limitations. For example, a controller cannot detect where we are looking in the game, it has to be fed in, which means more controls for the player. It cannot detect the intention of the player; it has to wait until a sequence of button presses has been correctly entered before the game can proceed. Now imagine a solution that eliminates this middle-man of the gaming world, and ask yourself how important it is for the future of gaming.


    Figure 3. Creative* Interactive Gesture Camera; color, depth and microphone – bridging the reality gap

    Imagine the future of FPS gaming. Imagine all your in-game conversations being conducted by talking to the characters instead of selecting buttons on the screen. Imagine your entire array of in-game player controls commanded via a small vocabulary of commonly spoken words. The importance of these methods cannot be overstated, and they will surely form the basis of most, if not all, FPS game interfaces in the years to come.

    3. Detect Player Leaning

    You have probably played a few FPS games and are familiar with the Q and E keys to lean left and right to peek around corners. You might also have experienced a similar implementation where you can click the right mouse button to zoom your weapon around a corner or above an obstacle. Both game actions require additional controls from the player and add to the list of things to learn before the game starts to feel natural.

    With a perceptual computing camera installed, you can detect where the head and shoulders of your human player lie in relation to the center of the screen. By leaning left and right in the real world, you can mimic this motion in the virtual game world. No additional buttons or controls are required, just lean over to peek around a corner, or sidestep a rocket, or dodge a blow from an attacker, or simply view an object from another angle.


    Figure 4. Press E or lean your body to the right. Which one works for you?

    In practice, however, you will find this solution has a serious issue. You will notice your gaming experience disrupted by a constantly moving (even jittering) perspective as the human player naturally shifts position while the game is played. It can be disruptive to some elements of the game, such as cut-scenes, and to fine-grained controls, such as using the crosshair to select small objects in the game. There are two solutions: the first is to create a series of regions that signal a shift to a more extreme lean angle, and the second is to disable this feature altogether in certain game modes, as mentioned above.


    Figure 5. Dividing a screen into horizontal regions allows better game leaning control

    By having these regions defined, the majority of the gaming is conducted in the center zone, and only when the player makes extreme leaning motions does the augmentation kick in and shift the game perspective accordingly.

    Implementing this technique is very simple and requires just a few commands. You can use the official Intel Perceptual Computing SDK, or you can create your own commands from the raw depth data. Below is the initialization code for a module created for the DBP language, which reduces the actual coding to just a few lines.

    rem Init PC
    perceptualmode=pc init()
    pc update
    normalx#=pc get body mass x()
    normaly#=pc get body mass y()

    The whole technique can be coded with just three commands. The first initializes the perceptual computing camera and returns whether the camera is present and working. The second command asks the camera to take a snapshot and do some common background calculations on the depth data. The last two lines grab something called a Body Mass Coordinate, which is the average coordinate of any foreground object in the field of the depth camera. For more information on the Body Mass Coordinate technique, read the article on Depth Data Techniques (http://software.intel.com/en-us/articles/perceptual-computing-depth-data-techniques).

    Of course detecting the horizontal zones requires a few more simple lines of code, returning an integer value that denotes the mode and then choosing an appropriate angle and shift vector that can be applied to the player camera.

    rem determine lean mode
    do
     leanmode=0
     normalx#=pc get body mass x()/screen width()
     if normalx#<0.125
      leanmode=-2
     else
      if normalx#<0.25
       leanmode=-1
      else
       if normalx#>0.875
        leanmode=2
       else
        if normalx#>0.75
         leanmode=1
        endif
       endif
      endif
     endif
     leanangle#=0.0
     leanshiftx#=leanmode*5.0
     select leanmode
      case -2 : leanangle#=-7.0 : endcase
      case -1 : leanangle#=-3.0 : endcase
      case  1 : leanangle#= 3.0 : endcase
      case  2 : leanangle#= 7.0 : endcase
     endselect
     pc update
    loop

    Applying these lean vectors to the player camera is simplicity itself, and disabling it when the game is in certain modes will ensure you get the best of both worlds. Coding this in C++ or Unity simply requires a good head tracking system to achieve the same effect. To get access to this DBP module, please contact the author via twitter at https://twitter.com/leebambertgc. The buzz you get from actually peering around a corner is very cool, and is similar to virtual/augmented reality, but without the dizziness!

    4. Detect Player Conversations

    Earlier versions of the Intel® Perceptual Computing SDK had some issues with accurate voice detection, and even when it worked it only understood a U.S. accent. The latest SDK however is superb and can deal with multiple language accents and detect British vocals very well. Running the sample code in the SDK and parroting sentence after sentence proves just how uncannily accurate it is now, and you find yourself grinning at the spectacle.

    If you’re a developer old enough to remember the ‘conversation engines’ of the 8-bit days, you will recall the experimental applications that involved the user typing anything they wanted, and the engine picking out specific trigger words and using those to carry on the conversation. It could get very realistic sometimes, but often ended with the fall-back of ‘and how do you feel about that?’


    Figure 6. A simple conversation engine from the adventure game “Relics of Deldroneye”

    Roll the clock forward about 30 years and those early experiments could actually turn out to be something quite valuable for a whole series of new innovations with Perceptual Computing. Thanks to the SDK, you can listen to the player and convert everything said into a string of text. Naturally, it does not get it right every time, but neither do humans (ever play Chinese whispers?). Once you have a string of text, you can have a lot of fun with figuring out what the player meant, and if it makes no sense, you can simply get your in-game character to repeat the question.

    A simple example would be a shopkeeper in an FPS game, opening with the sentence, “What would you like, sir?” The Intel® Perceptual Computing SDK also includes a text-to-speech engine, so you can even get your characters to use the spoken word, which is much preferred in modern games over the ‘text-on-screen’ method. Normally in an FPS game, you would either just press a key to continue the story, or have a small multi-choice menu of several responses. Let’s assume the choices are “nothing,” “give me a health kit,” or “I want some ammo.” In the traditional interface you would select a button representing the choice you wanted through some sort of user interface mechanism.

    Using voice detection, you could parse the strings spoken by the player and look for any words that would indicate which of the three responses was used. It does not have to be the exact word or sentence as this would be almost impossible to expect and would just lead to frustration in the game. Instead, you would look for keywords in the sentence that indicate which of the three is most likely.

    NOTHING = “nothing, nout, don’t want anything, bye, goodbye, see ya”

    HEALTH = “health, kit, medical, heal, energy”

    AMMO = “ammo, weapon, gun, bullets, charge”

    Of course, if the transaction was quite important in the game, you would ensure the choice made was correct with a second question to confirm it, such as “I have some brand new ammo, direct from the factory, will that do?” The answer of YES and NO can be detected with 100% certainty, which will allow the game to proceed as the player intended.
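    To make the keyword-spotting idea concrete, here is a minimal sketch in Java (chosen purely for illustration; the Intel Perceptual Computing SDK itself targets C++, C#, and Unity, and the class name ResponseMatcher and the exact keyword lists are assumptions rather than SDK API). It counts keyword hits per category and asks the player to repeat when nothing matches:

    import java.util.Arrays;
    import java.util.List;

    // Keyword-spotting sketch: map a recognized sentence to one of the three
    // shopkeeper responses by counting keyword hits per category.
    public class ResponseMatcher {
        private static final List<String> NOTHING = Arrays.asList("nothing", "nowt", "bye", "goodbye");
        private static final List<String> HEALTH  = Arrays.asList("health", "kit", "medical", "heal", "energy");
        private static final List<String> AMMO    = Arrays.asList("ammo", "weapon", "gun", "bullets", "charge");

        // Returns "NOTHING", "HEALTH", "AMMO", or null when no keyword was recognized.
        public static String match(String spokenSentence) {
            int nothing = 0, health = 0, ammo = 0;
            for (String word : spokenSentence.toLowerCase().split("\\W+")) {
                if (NOTHING.contains(word)) nothing++;
                if (HEALTH.contains(word))  health++;
                if (AMMO.contains(word))    ammo++;
            }
            if (nothing + health + ammo == 0) return null;   // ask the character to repeat the question
            if (health >= ammo && health >= nothing) return "HEALTH";
            return (ammo >= nothing) ? "AMMO" : "NOTHING";
        }

        public static void main(String[] args) {
            System.out.println(match("give me a health kit please"));   // HEALTH
            System.out.println(match("I want some ammo"));              // AMMO
        }
    }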

    Of course this is the most complex form of voice detection and would require extensive testing and a wide vocabulary of detections to make it work naturally. The payoff is a gaming experience beyond anything currently enjoyed, allowing the player to engage directly with characters in the game.

    5. Detect Player Commands

    An easier form of voice control is the single command method, which gives the player advance knowledge of a specific list of words they can use to control the game. The Intel® Perceptual Computing SDK has two voice recognition modes “dictation” and “command and control.” The former would be used in the above complex system and the latter for the technique below.

    A game has many controls above and beyond simply moving and looking around, and depending on the type of game, can have nested control options dependent on the context you are in. You might select a weapon with a single key, but that weapon might have three different firing modes. Traditionally this would involve multiple key presses given the shortage of quick-access keys during high octane FPS action. Replace or supplement this with a voice command system, and you gain the ability to select the weapon and firing mode with a single word.


    Figure 7. Just say the word “reload”, and say goodbye to a keyboard full of controls

    The “command and control” mode allows very quick response to short words and sentences, but requires that the names you speak and the names detected are identical. Also you may find that certain words when spoken quickly will be detected as a slight variation on the word you had intended. A good trick is to add those variations to the database of detectable words so that a misinterpreted word still yields the action you wanted in the game. To this end it is recommended that you limit the database to as few words as that part of the game requires. For example, if you have not collected the “torch” in the game, you do not need to add “use torch” to the list of voice controls until it has been collected.
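    As a rough illustration of such a variation database (a sketch only; the class and method names here are assumptions and not part of the SDK), the active vocabulary can be kept as a map from every accepted variant back to the canonical command, and trimmed as the game state changes:

    import java.util.HashMap;
    import java.util.Map;

    // Sketch of a "command and control" vocabulary that maps common misdetections
    // back to the intended command and can grow or shrink with the game state.
    public class CommandDatabase {
        private final Map<String, String> variantToCommand = new HashMap<>();

        // Register a command together with the variations the recognizer tends to return.
        public void register(String command, String... variants) {
            variantToCommand.put(command.toLowerCase(), command);
            for (String variant : variants) {
                variantToCommand.put(variant.toLowerCase(), command);
            }
        }

        // Drop a command (and all of its variants) when it no longer applies.
        public void unregister(String command) {
            variantToCommand.values().removeIf(c -> c.equals(command));
        }

        // Returns the canonical command for a detected word or phrase, or null if unknown.
        public String resolve(String detected) {
            return variantToCommand.get(detected.toLowerCase());
        }

        public static void main(String[] args) {
            CommandDatabase db = new CommandDatabase();
            db.register("reload", "re load", "free load");   // misheard variants map back to "reload"
            db.register("use torch", "use search");          // only added once the torch is collected
            System.out.println(db.resolve("re load"));       // prints: reload
        }
    }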

    It is also recommended that you remove words that are too similar to each other so that the wrong action is not triggered at crucial moments in the game play. For example, you don’t want to set off a grenade when you meant to fire off a silent grappling hook over a wall to escape an enemy.

    If the action you want to perform is not too dependent on quick reaction times, you can revert to the “dictation” mode and do more sophisticated controls such as the voice command “reload with armor piercing.” The parser would detect “reload,” “armor,” and “piercing.” The first word would trigger a reload, and the remaining ones would indicate a weapon firing mode change and trigger that.

    When playing the game, using voice to control your status will start to feel like you have a helper sitting on your shoulder, making your progress through the game much more intuitive. Obviously there are some controls you want to keep on a trigger finger, such as firing, moving, looking around, ducking, and other actions that require split-second reactions. The vast majority, however, can be handed over to the voice control system, and the more controls you have, the more this new method wins over the old keyboard approach.

    6. Tricks and Tips

     

    Do’s

    • Using awareness of the player’s real-world position and motion to control elements within the game will create an immediate sense of connection. Deciding when to demonstrate that connection will be the key to a great integration of Perceptual Computing.
    • Use “dictation” for conversation engines and “command and control” for instant response voice commands. They can be mixed, providing reaction time does not impede game play.
    • If you are designing your game from scratch, consider developing a control system around the ability to sense real player position and voice commands. For example a spell casting game would benefit in many ways from Perceptual Computing as the primary input method.
    • When you are using real world player detection, ensure you specify a depth image stream of 60 frames per second to give your game the fastest possible performance.

    Don’ts

    • Do not feed raw head tracking coordinates directly to the player camera, as this will create uncontrollable jittering and ruin the smooth rendering of any game.
    • Do not use voice control for game actions that require instantaneous responses. As accurate as voice control is, there is a noticeable delay between speaking the word and getting a response from the voice function.
    • Do not detect whole sentences in one string comparison. Parse the sentence into individual words and run string comparisons on each one against a larger database of word variations of similar meaning.

    7. Final Thoughts

    A veteran of the FPS gaming experience may well scoff at the concept of voice-activated weapons and real-world acrobatics to dodge rockets. The culture of modern gaming has created a total dependence on the mouse, keyboard, and controller as lifelines into these gaming worlds. Naturally, offering an alternative would be viewed with incredulity until the technology fully saturates into mainstream gaming. The same can be said of virtual reality technology, which for 20 years attempted to gain mainstream acceptance without success.

    The critical difference today is that this technology is now fast enough and accurate enough for games. Speech detection 10 years ago was laughable and motion detection was a novelty, and no game developer would touch them with a barge pole. Thanks to the Intel Perceptual Computing SDK, we now have a practical technology to exploit and one that’s both accessible to everyone and supported by peripherals available at retail.

    An opportunity exists for a few developers to really pioneer in this area, creating middleware and finished products that push the established model of what an FPS game actually is. It is said that among all the fields of computing, game technology is the one most likely to push all aspects of the computing experience. No other software pushes the limits of the hardware as hard as games, pushing logic and graphics processing to the maximum (and often beyond) in an attempt to create a simulation more realistic and engaging than the year before. It’s fair to suppose that this notion extends to the very devices that control those games, and it’s realistic to predict that we’ll see many innovations in this area in years to come. The great news is that we already have one of those innovations right here, right now, and it only requires us to show the world what amazing things it can do.

    About The Author

    When not writing articles, Lee Bamber is the CEO of The Game Creators (http://www.thegamecreators.com), a British company that specializes in the development and distribution of game creation tools. Established in 1999, the company and surrounding community of game makers are responsible for many popular brands including Dark Basic, FPS Creator, and most recently App Game Kit (AGK).

    The application that inspired this article and the blog that tracked its seven week development can be found here: http://ultimatecoderchallenge.blogspot.co.uk/2013/02/lee-going-perceptual-part-one.html

    Lee also chronicles his daily life as a coder, complete with screen shots and the occasional video here: http://fpscreloaded.blogspot.co.uk

     

    Intel and the Intel logo are trademarks of Intel Corporation in the U.S. and/or other countries.
    Copyright © 2014 Intel Corporation. All rights reserved.
    *Other names and brands may be claimed as the property of others.

  • Gesture Recognition
  • Voice Recognition
  • Lee Bamber
  • FPS
  • Developers
  • Windows*
  • C#
  • C/C++
  • Intel® Perceptual Computing SDK
  • Perceptual Computing
  • Sensors
  • User Experience and Design
  • URL
  • Meshcentral.com - Intel(R) Galileo Tower!


    Meshcentral.com

    In the quest to improve Meshcentral, no challenge is too tall, and today, thanks to lots of help from my group, I am presenting pictures and video of the first ever Intel® Galileo tower! It is a cluster of 16 Intel® Galileo devices assembled into a unique tower design using a simple and elegant 3D-printed part. The resulting device is just as impressive to see as the software running on it. The design came out of necessity after my group ordered 15 more Intel Galileo devices late last year. For people who don’t know, these little computers are designed around an Intel® Quark processor and are perfect for many different projects that require the smarts of an Intel processor in a small package. They are used to build all sorts of things and are one of the building blocks of the Internet of Things.

    Meshcentral.com announced support for Intel® Galileo boards last year, and since then we have made improvements and pushed out Mesh agent updates to the public. In the quest to continue improving Meshcentral and make it an outstanding solution for the Internet of Things, I am enlisting the help of this new tower. Designed in less than 2 weeks, the 16 Intel Galileo devices are arranged in a circular tower with the network and power cables going through the center for perfect cable management. For people with 3D printers, the part is available openly on Thingiverse. I made a first quick YouTube video about the tower.

    If you want to install Meshcentral on your own Intel Galileo board, check out this presentation.

    Enjoy!
    Ylian
    meshcentral.com

    Pictures of the assembly process and the finished product.
    Check out the first quick YouTube video about the tower.

    Intel Galileo Tower Pictures

    The tower was fully designed using Blender ahead of time. A virtual
    Intel Galileo board was used to make the design fit correctly.

    How the Intel Galileo Tower was made

    With outstanding software to go with the hardware, the Mesh agent forms a peer-to-peer
    network within the tower and allows for secure messaging and scalable Internet control.

    Intel Galileo Software

  • Mesh
  • Meshcentreal
  • MeshCentral.com
  • p2p
  • arduino
  • Galileo
  • Intel Galileo
  • Tower
  • Intel Galileo Tower
  • Galileo Tower
  • Ylian
  • 3D Printer

  • Cloud Computing
  • Cluster Computing
  • Development Tools
  • Embedded
  • Open Source
  • Security
  • Sensors
  • Cloud Services
  • Developers
  • Partners
  • Professors
  • Students
  • Linux*
  • Yocto Project
  • Intel Learning Series for Android* Developers - Main Page


    This is an online version of the Intel Press book An Introduction... Intel for Android* Developers. Engineers from several Intel teams, including mobile computing, Android* OS, development tools, and software and services, together with subject-matter experts from outside Intel, contributed to create the “Intel Learning Series for Android* Developers.” New sections are being worked on and will soon be available to the Intel Android* community (http://software.intel.com/es-es/android/):

    Intel Learning Series for Android* Developers, No. 1: Introduction to Android* on Intel Processors

    1. Intel and Android*, a Brief History
        1.1 The Android x86 Project
        1.2 The Moorestown Platform
        1.3 The Intel and Google Partnership
        1.4 The Medfield Platform
    2. Devices
        2.1 Avaya Flare*
        2.2 ViewSonic ViewPad 10*
        2.3 Lava Xolo X900*
        2.4 Lenovo K800*
        2.5 Orange San Diego*
        2.6 Upcoming Devices
    3. Android SDK/NDK Support for x86
    4. Intel Software for Android
        4.1 Intel® Hardware Accelerated Execution Manager (Intel® HAXM)
        4.2 Intel® Graphics Performance Analyzers (Intel® GPA)
        4.3 Intel® Atom™ x86 System Image for Android* 4.0

    Intel Learning Series for Android* Developers, No. 2: Intel Processors for Mobile Devices

    1. Inside Medfield
        1.1 Saltwell Overview
    2. Architectural Differences Between Saltwell and ARM (Cortex A15)
        2.1 Architecture
        2.2 Integer Pipelines
        2.3 Instruction Sets
        2.4 Multi-Core/Multi-Thread Support
        2.5 Security Technology
    3. Intel® Hyper-Threading Technology
    4. Application Compatibility: Native Development Kit and Binary Translator

    Intel Learning Series for Android* Developers, No. 3: Android* Phones with Intel Processors

    1. Lava Xolo X900
        1.1 Hardware
        1.2 Software
        1.3 Benchmarks
    2. Lenovo K800*
        2.1 Hardware
        2.2 Software
        2.3 Benchmarks
    3. Orange San Diego*
        3.1 Hardware
        3.2 Software
        3.3 Benchmarks

    Intel Learning Series for Android* Developers, No. 4: Android* Tablet Sensors

    1. Sensors on Intel® Atom™ Processor-Based Android Tablets
    2. The Android Sensor Framework
        2.1 Obtaining Sensor Configuration
        2.2 Sensor Coordinate System
        2.3 Monitoring Sensor Events
        2.4 Motion Sensors
        2.5 Position Sensors
        2.6 Environment Sensors
        2.7 Sensor Performance and Optimization Guidelines
    3. GPS and Location
        3.1 Android Location Services
        3.2 Obtaining GPS Location Updates
        3.3 GPS and Location Performance and Optimization Guidelines
    4. Summary

    Intel Learning Series for Android* Developers, No. 5: Installing the Android* SDK for Intel® Architecture

    1. Supported Operating Systems
    2. Hardware Requirements
    3. Installing the JDK
    4. Installing Eclipse*
    5. Installing Apache Ant (Optional)
    6. Downloading the SDK Starter Package and Adding SDK Components
    7. Setting Up Eclipse to Work with the SDK
        7.1 Installing the ADT Plugin for Eclipse
        7.2 Configuring the ADT Plugin
    8. Overview of Android Virtual Device Emulation
    9. Which Emulator Should You Use?
    10. Reasons to Use the Emulator
    11. Creating an Emulator Image
    12. Configuring the SDK to Use the x86 Emulator Images
    13. Gingerbread Feature Example: Battery Usage Statistics
    14. Gingerbread Feature Example: Task Manager
    15. Gingerbread Feature Example: Copying and Pasting Text
    16. Ice Cream Sandwich Emulation
    17. Installation Guide
        17.1 Prerequisites
        17.2 Downloading Through the Android SDK Manager
        17.3 Using the System Image
        17.4 Manual Download
        17.5 CPU Acceleration
        17.6 GPU Acceleration

    Intel Learning Series for Android* Developers, No. 6: Debugging on Android* OS

    1. Prerequisites
        1.1 Intel® USB Driver for Android Devices
        1.2 Installing the Intel® Atom™ x86 System Image for the Android Emulator
    2. Debugging Applications with the Android Debug Bridge
        2.1 Setting Up ADB
        2.2 ADB on Windows
        2.3 ADB Host-Client Communication
        2.4 Key ADB Device Commands
        2.5 Using the Android Debug Tools Plugin for Eclipse
            2.5.1 The Eclipse Debug Perspective
            2.5.2 The DDMS Perspective
            2.5.3 The Application Runtime Environment for Debugging
    3. Intel® Hardware Accelerated Execution Manager
        3.1 KVM Installation
        3.2 Using a 64-Bit Kernel
        3.3 Installing KVM
        3.4 Starting the Android Virtual Device
        3.5 Using the AVD Manager in Eclipse to Start a Virtual Device
    4. Running Android Inside Oracle* VirtualBox*
        4.1 Google's VirtualBox x86 Build Targets for Android 4.x
            4.1.1 Downloading the Source Tree and Installing the Repository
        4.2 Building a Custom Kernel with Mouse Support
            4.2.1 Adding the Patched Kernel
            4.2.2 Reducing Build Time by Using CCACHE
            4.2.3 Building Android 4.0.x with the New Kernel
        4.3 Building the VirtualBox Disk and the Android Installer
        4.4 Using an Android Installation Disk to Create a Large Virtual Partition
        4.5 Serial Port
        4.6 Ethernet
            4.6.1 Final Notes
    5. Debugging with GDB, the GNU Project Debugger
    6. The Intel® Graphics Performance Analyzers (Intel® GPA)
    7. System Debugging of Android OS Running on an Intel® Atom™ Processor
        7.1 JTAG Debugging
        7.2 Debugging on Android OS
        7.3 Device Driver Debugging
        7.4 Hardware Breakpoints
    8. Cross-Debugging: Intel® Atom™ Processor and ARM Architecture
        8.1 Variable-Length Instructions
        8.2 Hardware Interrupts
        8.3 Single Stepping
        8.4 Virtual Memory Mapping
    9. Considerations for Intel® Hyper-Threading Technology
    10. SoC and Heterogeneous Multi-Core Interaction
        10.1 SVEN (System Visible Event Nexus)
        10.2 Signal Encode/Decode Debugging
        10.3 Benefits of SVEN
    11. Summary
    12. References

    Intel Learning Series for Android* Developers, No. 7: Creating and Porting NDK-Based Android* Applications for Intel® Architecture

    1. Introduction to the Native Development Kit
    2. Building a "Hello, world!" Application with the NDK
        2.1 Preparing and Validating the Environment
        2.2 Building with the GNU* Compiler
        2.3 Building with the Intel® C++ Compiler
        2.4 Packaging the Intel® C++ Compiler Shared Libraries
    3. Intel® C++ Compiler Options
        3.1 Compatibility Options
        3.2 Performance Options
    4. Vectorization
        4.1 Vectorization Report
        4.2 Pragmas
        4.3 Interprocedural Optimizations
        4.4 Limitations of Auto-Vectorization
        4.5 Interprocedural Optimization

    Intel Learning Series for Android* Developers, No. 8: Building Android* OS for Intel® Processors

    1. Building Android* Images with the GNU* Compiler for Android
        1.1 Preparing the Workspace
        1.2 Setting Up the Build Environment
        1.3 Building the Image
    2. Building Kernels
        2.1 Building the Kernel with the GNU* Compiler for Android
        2.2 Building the Kernel with the Intel® C++ Compiler for Android
    3. Building Images with the Intel® C++ Compiler for Android
        3.1 Integrating the Intel Compiler
        3.2 Flexible Configuration of the Build System

    Intel Learning Series for Android* Developers, No. 9: Rendering Graphics on Android for Intel® Architecture Using the SVG Library

    1. SVG Functionality
    2. SVG Shapes
    3. Integrating the SVG Library
    4. Rendering a File with the Modified SVG Library
        4.1 Why Use SAX?
            4.1.1 What Is SAX?
            4.1.2 Benefits
            4.1.3 Drawbacks
        4.2 Implementing the SAX Parser on Android
        4.3 Why Modify the Original SVG Library?
        4.4 SVG XML File with Attributes in the Render Tag
        4.5 SVG XML File with Attributes in the Group Tag

    Intel Learning Series for Android* Developers, No. 10: GPUs on Android* for the Intel® Atom™ Processor

    1. Introduction
    2. The Evolution of GPUs
    3. Two Major Mobile GPU Design Models
        3.1 Advantages of Deferred-Mode GPUs
        3.2 Advantages of Immediate-Mode GPUs
    4. Optimizing for Intel GPUs
    5. Conclusion

    Intel Learning Series for Android* Developers, No. 11: OpenGL ES* Support, Performance, and Features for Android* on the Intel® Atom™ Processor

    1. Introduction
    2. x86 System Images for Android Virtual Device Emulation
    3. Intel Graphics Performance Analyzers
    4. Where to Get the OpenGL ES Drivers
    5. The PowerVR* GPU
    6. OpenGL ES Extensions
    7. Floating-Point Performance
    8. The Android Framework SDK
    9. The Android NDK
    10. Renderscript
    11. Conclusion

  • Android Developer Learning

  • Courseware
  • Sample Code
  • Technical Article
  • Tutorial
  • Intel® Atom™ Processors
  • Mobility
  • Porting
  • Sensors
  • Touch Interfaces
  • OpenGL*
  • Android*
  • Phone
  • Developers
  • Partners
  • Professors
  • Students
  • Android*
  • Introducing the 4th Generation Intel® Atom™ Processor, BayTrail, to Android Developers


    Download: Introducing the 4th Generation Intel® Atom™ Processor, BayTrail, to Android Developers (PDF)

    Abstract


    Intel has introduced the fourth-generation Intel Atom processor, code-named BayTrail. This latest Atom processor is a multi-core system-on-chip (SoC) that integrates the newest generation of Intel® processor cores, graphics, memory, and I/O interfaces. It is also Intel's first SoC built on 22 nm process technology. This multi-core Atom processor delivers outstanding compute performance with better energy efficiency than the previous generation. Beyond the latest IA core technology, it offers many platform features, such as graphics, connectivity, security, and sensors, that let developers build software with very rich user experiences. This article focuses on what BayTrail means for Android, the enhancements Intel provides for the Android architecture, and the solutions Intel offers Android developers.
     
     

    Table of Contents


    • BayTrail SoC CPU Advantages
    • BayTrail SoC Component Enhancements
    • BayTrail Improvements over Previous-Generation Atom Processors
    • BayTrail Versions for Android – Z36XXX and Z37XXX
    • Intel Optimizations for Android Software
    • Intel Tools for Android Platforms on the Atom Architecture
    • References

    BayTrail SoC CPU Advantages


    This section gives an overview of the BayTrail CPU capabilities. The latest multi-core Intel® Atom™ SoC uses the Intel® Silvermont microarchitecture to deliver higher processing speed at lower power.

          Significantly higher performance
    • The quad-core architecture supports 4-core/4-thread out-of-order processing and 2 MB of L2 cache, making devices faster and more responsive when running multiple applications and services at the same time.
    • Intel® Burst Technology 2.0 brings additional cores online when needed, so CPU-intensive applications run faster and more smoothly
    • 22 nm process technology improves performance:
      • Optimized current while powered on improves performance
      • Reduced leakage while powered off improves energy efficiency
    • Supports 64-bit operating systems
          Efficient power management
    • Dynamic power sharing between the CPU and IP blocks (such as graphics) raises peak frequencies
    • The SoC's total power budget is allocated dynamically according to application demand
    • Finely tuned low-power states optimize power management and extend battery life
    • Cache retention in deep sleep states lowers idle power and shortens wake-up time
    • More than 10 hours of active battery life

    BayTrail CPU specifications in a nutshell

    BayTrail SoC Component Enhancements


    Beyond the processor cores, Intel has made many improvements to the SoC components, such as graphics, imaging, audio, display, storage, USB, and security. These components enable developers to build innovative software for IA-based Android devices. Highlights of each component follow.

    • Display
      • Supports high-definition displays (resolutions up to 2560x1600 @ 60 Hz)
      • Supports retina displays
      • Supports dual displays
    • Intel® Wireless Display (WiDi)
      • Supports 1080p/30 and two-channel stereo
      • HDCP 2.1 content protection (Widevine DRM)
      • Supports multitasking
      • Supports dual-screen applications
      • WFA Miracast certified
    • Graphics and media engine
      • Based on the Intel Gen7 HD graphics processor for stunning visuals
      • Supports graphics burst, OpenGL ES 3.0, and hardware-accelerated video encode/decode for multimedia formats
      • Supports a wide range of video and display post-processing
      • Visually stunning HD graphics, with more than 8–10 hours of battery life for smooth HD video playback and Internet streaming, respectively
    • Image signal processor
      • Supports ISP 2.0
      • Supports up to two 8 MP cameras
      • Supports a variety of imaging technologies, such as burst mode, continuous capture, low-light noise reduction, video stabilization, 3A, and zero shutter lag
    • USB
      • Supports USB 3.0
    • Audio
      • Low-power audio engine
      • Supports multiple audio formats
    • Storage
      • Supports one SDIO 3.0 controller
      • Supports one eMMC 4.51 controller
      • Supports one SDXC controller
    • Security
      • Supports secure boot
      • Intel® Trusted Execution Engine (Intel® TXE)

    SoC component specifications in a nutshell

    BayTrail Improvements over Previous-Generation Atom Processors


    Intel released its first Atom processor for Android phones, the Z24XX (code-named "Medfield"), in 2012; it was a single-core processor built on Intel's 32 nm process technology. In spring 2013, Intel released an improved version of Medfield, the Z25XX series for phones and tablets (code-named "CloverTrail+"), a dual-core processor also built on 32 nm process technology. In fall 2013, Intel released its latest Atom processor, the Z3XXX BayTrail, which comes in dual-core and quad-core versions, both based on Intel's latest 22 nm process technology. BayTrail brings many enhancements. The table below summarizes the enhancements of BayTrail over the previous generations.

    BayTrail enhancements over previous-generation SoCs

    BayTrail Versions for Android – Z36XXX and Z37XXX


    The table below summarizes the different BayTrail versions for Android.

    BayTrail SoC versions

    Intel Optimizations for Android Software


    Android is Google's Linux-based, open source software for phones and tablets. Google releases the official code to the public through the Android Open Source Project (AOSP). Original equipment manufacturers (OEMs) planning to ship Android devices can work with Google to modify the release for their own platform needs. The Android software stack includes:

    • Linux kernel – includes device drivers and the software related to memory, security, and power management.
    • Middleware – contains native libraries for application development, such as media, SQLite, OpenGL, SSL, graphics, and WebKit.
    • Android runtime – contains the core Java libraries and the Dalvik virtual machine needed to run Java applications.
    • Android framework – contains the Java classes and APIs needed to develop Android applications and services.
    • Applications – the Android apps themselves.

    Android has evolved from its initial Cupcake release through the recent Jelly Bean (4.2) to the latest KitKat (4.4). BayTrail supports both Jelly Bean and KitKat. Intel provides many optimizations that improve the performance of Android software, so developers can build high-performing applications that give end users a smooth, fluid experience.

          The optimizations include:
    • Enhancements aimed at ensuring that Dalvik applications run smoothly on Intel processors
    • Tools that help NDK developers compile native code (C/C++) for x86
    • Optimizations for emerging web technologies such as HTML5 and JavaScript
    • Performance enhancements for the Dalvik virtual machine
    • Core library and kernel optimizations contributed through AOSP
    • Validated and optimized device drivers for better power and memory efficiency on x86 platforms

    Intel optimizations for Android software

    Intel Tools for Android Platforms on the Atom Architecture


    Google provides a complete set of tools to help developers build and debug software on the Android platform. Developers install the Android SDK and integrate it with the IDE of their choice to build software. Google also provides emulators, debuggers, code optimizers, performance optimizers, and testing tools.

    Developers can get started with Android software development using the initial tools listed in the table below.

    In addition to Google's Android tools, Intel provides tools specifically designed to help developers speed up software development for Android platforms on the Atom architecture.

    Summary of Intel tool features

    References


    1. BayTrail Z36XXX and Z37XXX datasheet: http://www.intel.com/content/www/us/en/processors/atom/atom-z36xxx-z37xxx-datasheet-vol-1.html
    2. Intel® Atom™ Processor Z3000 series for Android* tablets brief: http://www.intel.com/content/www/us/en/processors/atom/atom-z3000-android-tablets-brief.html?wapkw=android+atom+processor
    3. Intel IDF 2013 presentations:
      • Building Android* Systems on Intel® Architecture Platforms
      • Tablet Solutions for the Enterprise: Differentiating with Intel® Technology
      • Display Technologies for Intel® Graphics
      • Hands-On Lab: Developing, Optimizing, Debugging, and Tuning Android* Applications
      • Using the Secondary-Display APIs Supported by Android* Applications and Intel® Wireless Display Features
      • Accelerating Software Development for Android* Applications on Intel® Platforms
      • Developing Native Android Applications Optimized for Intel® Architecture
      • Technology Insight: The Intel® Platform for Tablets, Code-Named Bay Trail-T
      • Technology Insight: The Intel Silvermont Microarchitecture
      • Tablets Running Android* on Intel® Atom™ Processors

    Other Related Articles and Resources

    Bay Trail: Unveiled at IDF 2013
    Android* Emulator for Intel® Architecture (Gingerbread*)
    Multithreaded Android Programming for Intel IA
    Intel® Software Development Emulator
    Android* Application Development and Optimization on the Intel® Atom™ Platform
    To learn more about Intel tools for Android developers, visit the Intel® Developer Zone for Android

     

  • Developers
  • Intel AppUp® Developers
  • Partners
  • Professors
  • Students
  • Android*
  • Android*
  • Android* Development Tools
  • Intel Hardware Accelerated Execution Manager (HAXM)
  • Intel® C++ Compiler
  • Intel® JTAG Debugger
  • Intel® Threading Building Blocks
  • Intel® Graphics Performance Analyzers
  • Development Tools
  • Education
  • Intel® Itanium® Processors
  • Mobility
  • Optimization
  • Security
  • Sensors
  • Phone
  • Tablet
  • URL

  • Intel Learning Series for Android* Developers, No. 4: Android Tablet Sensors


    1. Sensors on Intel® Atom™ Processor-Based Android Tablets

    Tablets based on Intel Atom processors support a wide variety of hardware sensors. These sensors are used to detect motion and position changes and to report the ambient environment parameters. The block diagram in Figure 1 shows a possible sensor configuration on a typical Intel Atom processor-based Android tablet.

    Based on the data they report, we can classify these sensors into the classes and types shown in Table 4.1 below.

    Table 4.1    Sensor types supported by the Android platform

    Class | Sensor (type) | What it measures | Typical use
    Motion sensors | Accelerometer (TYPE_ACCELEROMETER) | The device's accelerations in m/s² | Motion detection
    | Gyroscope (TYPE_GYROSCOPE) | The device's rate of rotation | Rotation detection
    Position sensors | Magnetometer (TYPE_MAGNETIC_FIELD) | The strength of the Earth's geomagnetic field in µT | Compass
    | Proximity (TYPE_PROXIMITY) | The proximity of objects in cm | Nearby-object detection
    | GPS (not an android.hardware sensor type) | The precise geographic location of the device | Accurate geolocation detection
    Environment sensors | Ambient light sensor (TYPE_LIGHT) | The ambient light level in lx | Automatic screen-brightness control

    2. The Android Sensor Framework

    The Android sensor framework provides mechanisms to access the sensors and their data, with the exception of the GPS, which is accessed through the Android location services. We will discuss that later in this chapter. The sensor framework is part of the android.hardware package. Table 4.2 lists the main classes and interfaces that make up the sensor framework.

    Table 4.2    The sensor framework of the Android platform

    Name | Type | Description
    SensorManager | Class | Used to create an instance of the sensor service. Provides various methods for accessing sensors, registering and unregistering sensor event listeners, and so on.
    Sensor | Class | Used to create an instance of a specific sensor.
    SensorEvent | Class | Used by the system to publish sensor data. It includes the raw sensor data values, the data accuracy, and a timestamp.
    SensorEventListener | Interface | Provides callback methods to receive notifications from the SensorManager when the sensor data or the sensor accuracy has changed.

    2.1 Obtaining Sensor Configuration

    Which sensors are available on a device is decided solely by its manufacturer. You can use the sensor framework to discover the available sensors at runtime by invoking SensorManager's getSensorList() method with the parameter Sensor.TYPE_ALL. Code Example 1 displays, in a fragment, the list of available sensors and the vendor, power, and accuracy information of each sensor.

    package com.intel.deviceinfo;      
    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map; 	 
    import android.app.Fragment;
    import android.content.Context;
    import android.hardware.Sensor;
    import android.hardware.SensorManager;
    import android.os.Bundle;
    import android.view.LayoutInflater;
    import android.view.View;
    import android.view.ViewGroup;
    import android.widget.AdapterView;
    import android.widget.AdapterView.OnItemClickListener;
    import android.widget.ListView;
    import android.widget.SimpleAdapter; 
    	 
    public class SensorInfoFragment extends Fragment {  
        private View mContentView;  
        private ListView mSensorInfoList;     
        SimpleAdapter mSensorInfoListAdapter;
        private List<Sensor> mSensorList; 
     
        private SensorManager mSensorManager;  
        @Override
        public void onActivityCreated(Bundle savedInstanceState) {
            super.onActivityCreated(savedInstanceState);
        }
        @Override
        public void onPause() 
        { 
            super.onPause();
        }
        @Override
        public void onResume() 
        {
            super.onResume();
        }
        @Override
        public View onCreateView(LayoutInflater inflater, ViewGroup container,
                Bundle savedInstanceState) {
            mContentView = inflater.inflate(R.layout.content_sensorinfo_main, null);
            mContentView.setDrawingCacheEnabled(false);
            mSensorManager = (SensorManager)getActivity().getSystemService(Context.SENSOR_SERVICE);
            mSensorInfoList = (ListView)mContentView.findViewById(R.id.listSensorInfo);
            mSensorInfoList.setOnItemClickListener( new OnItemClickListener() {
                @Override
                public void onItemClick(AdapterView<?> arg0, View view, int index, long arg3) {
                    // with the index, figure out what sensor was pressed
                    Sensor sensor = mSensorList.get(index);
                    // pass the sensor to the dialog.
                    SensorDialog dialog = new SensorDialog(getActivity(), sensor);
                    dialog.setContentView(R.layout.sensor_display);
                    dialog.setTitle("Sensor Data");
                    dialog.show();
                }
            });            
            return mContentView;
        }      
        void updateContent(int category, int position) {
            mSensorInfoListAdapter = new SimpleAdapter(getActivity(), 
              getData() , android.R.layout.simple_list_item_2,
              new String[] {
                  "NAME",
                  "VALUE"
              },
              new int[] { android.R.id.text1, android.R.id.text2 });
          mSensorInfoList.setAdapter(mSensorInfoListAdapter);
        }
        protected void addItem(List<Map<String, String>> data, String name, String value)   {
            Map<String, String> temp = new HashMap<String, String>();
            temp.put("NAME", name);
            temp.put("VALUE", value);
            data.add(temp);
        }  
        private List<? extends Map<String, ?>> getData() {
            List<Map<String, String>> myData = new ArrayList<Map<String, String>>();
            mSensorList = mSensorManager.getSensorList(Sensor.TYPE_ALL);
            for (Sensor sensor : mSensorList ) {
                addItem(myData, sensor.getName(),  "Vendor: " + sensor.getVendor() + ", min. delay: " + sensor.getMinDelay() +", power while in use: " + sensor.getPower() + "mA, maximum range: " + sensor.getMaximumRange() + ", resolution: " + sensor.getResolution());
            }
            return myData;
        }
    }

    Code Example 1: A fragment that displays the list of sensors**. Source: Intel Corporation, 2012

    2.2 Sensor Coordinate System

     

    The sensor framework reports sensor data using a standard 3-axis coordinate system, where X, Y, and Z are represented by values[0], values[1], and values[2], respectively, in the SensorEvent object.


    Sensors such as the light, temperature, and proximity sensors return a single value. For these sensors, only values[0] of the SensorEvent object is used.

    Other sensors report data in the standard 3-axis coordinate system. The following sensors are in that group:

    • Accelerometer
    • Gravity sensor
    • Gyroscope
    • Geomagnetic field sensor

    The 3-axis sensor coordinate system is defined relative to the device's screen in its natural (default) orientation. For tablets the natural orientation is usually landscape, while for phones it is portrait. When a device is held in its natural orientation, the x axis is horizontal and points to the right, the y axis is vertical and points up, and the z axis points out of the screen. Figure 4.2 shows the sensor coordinate system for a tablet.


    Figure 4.2. The sensor coordinate system
    Source: Intel Corporation, 2012

    The most important point to keep in mind is that the sensor coordinate system never changes when the device is moved or its orientation changes.

    2.3 Monitoring Sensor Events

    The sensor framework reports data through SensorEvent objects. To monitor the data of a specific sensor, a class can implement the SensorEventListener interface and register with the SensorManager for that sensor. The framework informs the class about changes in the sensor's state through the following two SensorEventListener callback methods implemented by the class:

    onAccuracyChanged()
    and
    onSensorChanged()

    Code Example 2 implements the SensorDialog used in the SensorInfoFragment example we saw in the section "Obtaining Sensor Configuration."

    package com.intel.deviceinfo;
      
    import android.app.Dialog;
    import android.content.Context;
    import android.hardware.Sensor;
    import android.hardware.SensorEvent;
    import android.hardware.SensorEventListener;
    import android.hardware.SensorManager;
    import android.os.Bundle;
    import android.widget.TextView;
      
    public class SensorDialog extends Dialog implements SensorEventListener {
        Sensor mSensor;
        TextView mDataTxt;
        private SensorManager mSensorManager; 
           
      
        public SensorDialog(Context ctx, Sensor sensor) {
            super(ctx);
            mSensor = sensor;
            // obtain the SensorManager so the listener can be (un)registered in onStart()/onStop()
            mSensorManager = (SensorManager) ctx.getSystemService(Context.SENSOR_SERVICE);
        }
           
        @Override
        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            mDataTxt = (TextView) findViewById(R.id.sensorDataTxt);
            mDataTxt.setText("...");
            setTitle(mSensor.getName());
        }
           
        @Override
        protected void onStart() {
            super.onStart();
            mSensorManager.registerListener(this, mSensor,  SensorManager.SENSOR_DELAY_FASTEST);
        }
                 
        @Override
        protected void onStop() {
            super.onStop();
            mSensorManager.unregisterListener(this, mSensor);
        }
      
        @Override
        public void onAccuracyChanged(Sensor sensor, int accuracy) {
        }
      
        @Override
        public void onSensorChanged(SensorEvent event) {
            if (event.sensor.getType() != mSensor.getType()) {
                return;
            }
            StringBuilder dataStrBuilder = new StringBuilder();
            if ((event.sensor.getType() == Sensor.TYPE_LIGHT)||
                (event.sensor.getType() == Sensor.TYPE_TEMPERATURE)) {
                dataStrBuilder.append(String.format("Data: %.3f\n", event.values[0]));
            }
            else{         
                dataStrBuilder.append( 
                    String.format("Data: %.3f, %.3f, %.3f\n", 
                    event.values[0], event.values[1], event.values[2] ));
            }
            mDataTxt.setText(dataStrBuilder.toString());
        }
    }

    Code Example 2: A dialog that displays the sensor values**

    2.4 Motion Sensors

    Motion sensors are used to monitor device movement, such as shaking, rotating, swinging, or tilting. The accelerometer and the gyroscope are two motion sensors available on many tablets and phones.

    Motion sensors report data using the sensor coordinate system, where the three values in the SensorEvent object, values[0], values[1], and values[2], represent the values for the x, y, and z axes, respectively.

    To understand the motion sensors and use their data in an application, we need to apply some physics formulas related to force, mass, acceleration, Newton's laws of motion, and the relationships among these quantities over time. Readers who want to learn more about these formulas and relationships can consult physics textbooks or public-domain materials.

    The accelerometer measures the acceleration applied to the device.

    Table 4.3    The accelerometer        Source: Intel Corporation, 2012

    Sensor | Type | SensorEvent data (m/s²) | Description
    Accelerometer | TYPE_ACCELEROMETER | values[0] | Acceleration along the x axis
    | | values[1] | Acceleration along the y axis
    | | values[2] | Acceleration along the z axis

    The concept of the accelerometer is derived from Newton's second law of motion:
    a = F/m

    The acceleration of an object is the result of the net external force applied to it. External forces include one that applies to every object on Earth: gravity. The acceleration is directly proportional to the net force F applied to the object and inversely proportional to the object's mass m.

    In our code, instead of using the equation above directly, we are usually interested in the effect the acceleration has on the device's velocity and position over a period of time. The following equation describes the relationship between an object's velocity v1, its original velocity v0, the acceleration a, and the time t:
    v1 = v0 + at

    To calculate the object's displacement s, we use the following equation:
    s = v0t + (1/2)at²

    In many cases we start with the condition v0 equal to 0 (before the device starts moving), which simplifies the equation to:
    s = at²/2

    Because of gravity, the gravitational acceleration, represented by the symbol g, applies to every object on Earth. It is independent of the object's mass and depends only on the object's altitude above sea level. Its value varies between 9.78 and 9.82 (m/s²). We adopt the conventional standard value of g:
    g = 9.80665 (m/s²)

    Because the accelerometer returns values using a multidimensional coordinate system, in our code we can calculate the distances along the x, y, and z axes with the following equations:

    Sx = AxT²/2

    Sy = AyT²/2

    Sz = AzT²/2

    where Sx, Sy, and Sz are the displacements along the x, y, and z axes, respectively, and Ax, Ay, and Az are the accelerations along the x, y, and z axes, respectively. T is the length of the measured time period.
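    As a quick, hypothetical illustration of these equations (not taken from the original book; the class name and the fixed sampling interval are assumptions), the per-axis displacement over a short interval T can be estimated directly from the accelerometer values, assuming the device starts at rest (v0 = 0):

    // Minimal sketch: S = A*T²/2 on each axis, with the device assumed to start at rest.
    public class DisplacementEstimator {
        public static float[] displacement(float ax, float ay, float az, float seconds) {
            float t2 = seconds * seconds;
            return new float[] {
                ax * t2 / 2.0f,   // Sx = Ax*T²/2
                ay * t2 / 2.0f,   // Sy = Ay*T²/2
                az * t2 / 2.0f    // Sz = Az*T²/2
            };
        }

        public static void main(String[] args) {
            // e.g. event.values[0..2] from onSensorChanged(), sampled over 0.1 s
            float[] s = displacement(0.0f, 2.0f, 9.80665f, 0.1f);
            System.out.printf("Sx=%.4f Sy=%.4f Sz=%.4f m%n", s[0], s[1], s[2]);
        }
    }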

    Code Example 3 shows how to create an instance of the accelerometer.

    public class SensorDialog extends Dialog implements SensorEventListener {
        ... 
        private Sensor mSensor;
        private SensorManager mSensorManager; 
           
        public SensorDialog(Context context) {
            super(context);
            mSensorManager = (SensorManager)context.getSystemService(Context.SENSOR_SERVICE);
            mSensor = mSensorManager.getDefaultSensor(Sensor.TYPE_ACCELEROMETER);
        ...
    }

    Code Example 3: Creating an accelerometer instance (**)
    Source: Intel Corporation, 2012

    Sometimes we do not use the values of all three dimensions. We may also need to take the device's orientation into account. For example, when developing a maze application, we use only the gravitational acceleration along the x and y axes to calculate the ball's movement distances and directions based on the device orientation. The following code fragment (Code Example 4) describes the logic.

    @Override
    public void onSensorChanged(SensorEvent event) {
        if (event.sensor.getType() != Sensor.TYPE_ACCELEROMETER) {
            return;
        } 
    float accelX, accelY;
    ...
    //detect the current rotation currentRotation from its “natural orientation”
    //using the WindowManager
        switch (currentRotation) {
            case Surface.ROTATION_0:
                accelX = event.values[0];
                accelY = event.values[1];
                break;
            case Surface.ROTATION_90:
                accelX = -event.values[0];
                accelY = event.values[1];
                break;
            case Surface.ROTATION_180:
                accelX = -event.values[0];
                accelY = -event.values[1];
                break;
            case Surface.ROTATION_270:
                accelX = event.values[0];
                accelY = -event.values[1];
                break;
        }
    //calculate the ball's moving distances along x and y using accelX, accelY, and the time delta
        ...
    }

    Code Example 4: Taking the device orientation into account when using accelerometer data in a maze game**
    Source: Intel Corporation, 2012

    The gyroscope measures the device's rate of rotation around the x, y, and z axes, as shown in Table 4.4. Gyroscope data values can be positive or negative. Looking at the origin from a position along the positive half of an axis, if the rotation around that axis is counterclockwise the value is positive, and if it is clockwise the value is negative. We can also determine the direction of the gyroscope values with the "right-hand rule," illustrated in Figure 4.3.


    Figure 4.3. Using the "right-hand rule" to determine the sign of gyroscope rotation values

    Table 4.4    The gyroscope        Source: Intel Corporation, 2012

    Sensor | Type | SensorEvent data (rad/s) | Description
    Gyroscope | TYPE_GYROSCOPE | values[0] | Rate of rotation around the x axis
    | | values[1] | Rate of rotation around the y axis
    | | values[2] | Rate of rotation around the z axis

    Code Example 5 shows how to create a gyroscope instance.

    public class SensorDialog extends Dialog implements SensorEventListener {
        ... 
        private Sensor mGyro;
        private SensorManager mSensorManager; 
           
        public SensorDialog(Context context) {
            super(context);
            mSensorManager = (SensorManager)context.getSystemService(Context.SENSOR_SERVICE);
            mGyro = mSensorManager.getDefaultSensor(Sensor.TYPE_GYROSCOPE);
        ...
    }

    Code Example 5: Creating a gyroscope instance**
    Source: Intel Corporation, 2012
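    To show one common way these rotation rates are consumed (a minimal sketch under the assumption that simply integrating rate × time is acceptable; real applications usually add filtering for drift, and the class name is illustrative), the accumulated rotation angle around each axis can be computed inside onSensorChanged():

    import android.hardware.Sensor;
    import android.hardware.SensorEvent;
    import android.hardware.SensorEventListener;

    // Integrates gyroscope rates (rad/s) into accumulated angles (rad) around x, y, z.
    public class GyroIntegrator implements SensorEventListener {
        private final float[] angleRad = new float[3];
        private long lastTimestampNs = 0;   // timestamp of the previous event, in nanoseconds

        @Override
        public void onSensorChanged(SensorEvent event) {
            if (event.sensor.getType() != Sensor.TYPE_GYROSCOPE) {
                return;
            }
            if (lastTimestampNs != 0) {
                float dt = (event.timestamp - lastTimestampNs) * 1.0e-9f;   // ns -> s
                angleRad[0] += event.values[0] * dt;   // rotation around x
                angleRad[1] += event.values[1] * dt;   // rotation around y
                angleRad[2] += event.values[2] * dt;   // rotation around z
            }
            lastTimestampNs = event.timestamp;
        }

        @Override
        public void onAccuracyChanged(Sensor sensor, int accuracy) {
            // not needed for this sketch
        }

        public float[] getAngles() {
            return angleRad.clone();
        }
    }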

    2.5 Position Sensors

    Many Android tablets support two position sensors: the magnetometer and the proximity sensor. The magnetometer measures the strength of the Earth's magnetic field along the x, y, and z axes, while the proximity sensor detects how far the device is from another object.

    2.5.1 Magnetometer

    The most important use the Android system makes of the magnetometer (described in Table 4.5) is to implement the compass.

    Table 4.5    The magnetometer        Source: Intel Corporation, 2012

    Sensor | Type | SensorEvent data (µT) | Description
    Magnetometer | TYPE_MAGNETIC_FIELD | values[0] | Earth's magnetic field strength along the x axis
    | | values[1] | Earth's magnetic field strength along the y axis
    | | values[2] | Earth's magnetic field strength along the z axis

    Code Example 6 shows how to create a magnetometer instance.

    public class SensorDialog extends Dialog implements SensorEventListener {
        ... 
        private Sensor mMagnetometer;
        private SensorManager mSensorManager; 
           
        public SensorDialog(Context context) {
            super(context);
            mSensorManager = (SensorManager)context.getSystemService(Context.SENSOR_SERVICE);
            mMagnetometer = mSensorManager.getDefaultSensor(Sensor.TYPE_MAGNETIC_FIELD);
        ...
    }

    Code Example 6: Creating a magnetometer instance**
    Source: Intel Corporation, 2012
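    To sketch how the magnetometer is typically combined with the accelerometer to implement a compass (a minimal example built on the standard SensorManager.getRotationMatrix() and getOrientation() helpers; it assumes the listener is registered for both TYPE_ACCELEROMETER and TYPE_MAGNETIC_FIELD, and the class name is illustrative):

    import android.hardware.Sensor;
    import android.hardware.SensorEvent;
    import android.hardware.SensorEventListener;
    import android.hardware.SensorManager;

    // Combines the latest accelerometer and magnetometer readings into an azimuth (heading).
    public class CompassListener implements SensorEventListener {
        private final float[] gravity = new float[3];
        private final float[] geomagnetic = new float[3];

        @Override
        public void onSensorChanged(SensorEvent event) {
            if (event.sensor.getType() == Sensor.TYPE_ACCELEROMETER) {
                System.arraycopy(event.values, 0, gravity, 0, 3);
            } else if (event.sensor.getType() == Sensor.TYPE_MAGNETIC_FIELD) {
                System.arraycopy(event.values, 0, geomagnetic, 0, 3);
            }
            float[] rotation = new float[9];
            float[] inclination = new float[9];
            if (SensorManager.getRotationMatrix(rotation, inclination, gravity, geomagnetic)) {
                float[] orientation = new float[3];
                SensorManager.getOrientation(rotation, orientation);
                float azimuthDegrees = (float) Math.toDegrees(orientation[0]);   // 0 = magnetic north
                // use azimuthDegrees to rotate a compass needle, for example
            }
        }

        @Override
        public void onAccuracyChanged(Sensor sensor, int accuracy) {
            // not needed for this sketch
        }
    }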

    2.5.2 Proximity

    The proximity sensor provides the distance between the device and another object. The device can use it to detect whether it is being held close to the user (see Table 4.6) and thereby determine whether the user is making or receiving phone calls.

    Table 4.6    The proximity sensor        Source: Intel Corporation, 2012

    Sensor | Type | SensorEvent data | Description
    Proximity | TYPE_PROXIMITY | values[0] | Distance to an object in cm. Some proximity sensors only report a boolean value indicating whether the object is close enough.

    Code Example 7 shows how to create a proximity sensor instance.

    public class SensorDialog extends Dialog implements SensorEventListener {
        ... 
        private Sensor mProximity;
        private SensorManager mSensorManager; 
           
        public SensorDialog(Context context) {
            super(context);
            mSensorManager = (SensorManager)context.getSystemService(Context.SENSOR_SERVICE);
            mProximity = mSensorManager.getDefaultSensor(Sensor.TYPE_PROXIMITY);
        ...
    }

    Code Example 7: Creating a proximity sensor instance**
    Source: Intel Corporation, 2012

    2.6 Environment Sensors

    Environment sensors detect and report the parameters of the environment in which the device is located. Whether a particular sensor is available depends solely on the device manufacturer. The ambient light sensor (ALS) is available on many Android tablets.

    2.6.1 Ambient Light Sensor (ALS)

    The system uses the ambient light sensor, described in Table 4.7, to detect the ambient lighting and automatically adjust the screen brightness accordingly.

    Table 4.7    The ambient light sensor        Source: Intel Corporation, 2012

    Sensor | Type | SensorEvent data (lx) | Description
    Ambient light sensor | TYPE_LIGHT | values[0] | The illuminance around the device

    Code Example 8 shows how to create an ambient light sensor instance.

        ... 
        private Sensor mALS;
        private SensorManager mSensorManager; 
      
        ... 
            mSensorManager = (SensorManager)context.getSystemService(Context.SENSOR_SERVICE);
            mALS = mSensorManager.getDefaultSensor(Sensor.TYPE_LIGHT);
        ...

    Code Example 8: Creating an ambient light sensor instance**
    Source: Intel Corporation, 2012

    2.7 Sensor Performance and Optimization Guidelines

    To use sensors in your applications, follow these guidelines:

    • Always check that a specific sensor is available before using it
      The Android platform does not require any specific sensor to be included in or excluded from a device. Which sensors are included is decided solely by the device manufacturer. Before using a sensor in your application, always check that it is actually available.
    • Always unregister your sensor listeners
      If the activity that implements the sensor listener becomes invisible, or the dialog stops, unregister the sensor listener. This can be done in the activity's onPause() method or in the dialog's onStop() method. Otherwise, the sensor will keep acquiring data and drain the battery as a result. A minimal register/unregister sketch follows this list.
    • Do not block the onSensorChanged() method
      The system calls the onSensorChanged() method frequently to report sensor data. Put as little logic as possible inside this method; complicated computations on the sensor data should be moved out of it.
    • Always test sensor applications on real devices
      All the sensors described in this section are hardware sensors. The Android emulator may not be enough to simulate a sensor's functionality and performance.
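    The following minimal sketch ties the first three guidelines together (the activity and field names are illustrative): the sensor's availability is checked once, the listener is registered only while the activity is in the foreground, and it is unregistered in onPause().

    import android.app.Activity;
    import android.hardware.Sensor;
    import android.hardware.SensorEvent;
    import android.hardware.SensorEventListener;
    import android.hardware.SensorManager;
    import android.os.Bundle;

    public class AccelActivity extends Activity implements SensorEventListener {
        private SensorManager mSensorManager;
        private Sensor mAccelerometer;

        @Override
        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            mSensorManager = (SensorManager) getSystemService(SENSOR_SERVICE);
            mAccelerometer = mSensorManager.getDefaultSensor(Sensor.TYPE_ACCELEROMETER);   // may be null
        }

        @Override
        protected void onResume() {
            super.onResume();
            if (mAccelerometer != null) {   // always check availability first
                mSensorManager.registerListener(this, mAccelerometer, SensorManager.SENSOR_DELAY_NORMAL);
            }
        }

        @Override
        protected void onPause() {
            super.onPause();
            mSensorManager.unregisterListener(this);   // stop acquiring data and save the battery
        }

        @Override
        public void onSensorChanged(SensorEvent event) {
            // keep this method light; move heavy processing somewhere else
        }

        @Override
        public void onAccuracyChanged(Sensor sensor, int accuracy) {
        }
    }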

    3. GPS and Location

    The Global Positioning System (GPS) is a satellite-based system that provides accurate geographic location information anywhere in the world. It is available on a wide variety of Android tablets. In many ways it behaves like a position sensor and can provide precise location data to applications running on the device. On the Android platform, the sensor framework does not handle the GPS directly. Instead, the Android location service accesses the GPS data and passes it to applications through location listener callbacks.

    3.1 Android Location Services

    Using the GPS is not the only way to obtain location information on Android devices. The system can also use Wi-Fi*, cellular networks, and other wireless networks to obtain the device's current location. The GPS and the wireless networks (including Wi-Fi and cellular networks) act as "location providers" for the Android location services. Table 4.8 lists the main classes and interfaces used to access the Android location services:

    Table 4.8    The location service of the Android platform        Source: Intel Corporation, 2012

    Name | Type | Description
    LocationManager | Class | Used to access the location services. Provides various methods for requesting periodic location updates for an application or for sending proximity alerts.
    LocationProvider | Abstract class | The abstract superclass for location providers.
    Location | Class | Used by location providers to encapsulate geographic data.
    LocationListener | Interface | Used to receive notifications from the LocationManager.

    3.2 Obtaining GPS Location Updates

     

    Similar to the mechanism of using the sensor framework to access sensor data, the application implements several callback methods defined in the LocationListener interface to receive GPS location updates. The LocationManager sends GPS update notifications to the application through these callbacks (the "don't call us, we'll call you" rule).

    To access GPS location data in the application, you must request the fine location access permission in your Android manifest file (Code Example 9).

    <manifest ...>
    ...
        <uses-permission android:name="android.permission.ACCESS_FINE_LOCATION"...  
    </manifest>

    Ejemplo de código 9: Cómo solicitar el permiso de acceso a ubicación precisa en el archivo de manifiesto**
    Fuente: Intel Corporation, 2012

    En el Ejemplo de código 10 se muestra cómo obtener actualizaciones del GPS y mostrar las coordenadas de latitud y longitud en una vista de texto de diálogo.

    package com.intel.deviceinfo;
      
    import android.app.Dialog;
    import android.content.Context;
    import android.location.Location;
    import android.location.LocationListener;
    import android.location.LocationManager;
    import android.os.Bundle;
    import android.widget.TextView;
      
    public class GpsDialog extends Dialog implements LocationListener {
        TextView mDataTxt;
        private LocationManager mLocationManager;
           
        public GpsDialog(Context context) {
            super(context);
            mLocationManager = (LocationManager)context.getSystemService(Context.LOCATION_SERVICE);
        }
      
        @Override
        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
                 mDataTxt = (TextView) findViewById(R.id.sensorDataTxt);
              mDataTxt.setText("...");
                 
            setTitle("Gps Data");
        }
           
        @Override
        protected void onStart() {
            super.onStart();
            mLocationManager.requestLocationUpdates(
                LocationManager.GPS_PROVIDER, 0, 0, this);
        }
                 
        @Override
        protected void onStop() {
            super.onStop();
            mLocationManager.removeUpdates(this);
        }
      
        @Override
        public void onStatusChanged(String provider, int status, 
            Bundle extras) {
        }
      
        @Override
        public void onProviderEnabled(String provider) {
        }
      
        @Override
        public void onProviderDisabled(String provider) {
        }
      
        @Override
        public void onLocationChanged(Location location) {
            StringBuilder dataStrBuilder = new StringBuilder();
            dataStrBuilder.append(String.format("Latitude: %.3f,   Logitude%.3fn", location.getLatitude(), location.getLongitude()));
            mDataTxt.setText(dataStrBuilder.toString());
                 
        }
    }

    Ejemplo de código 10: Diálogo que muestra los datos de ubicación del GPS**
    Fuente: Intel Corporation, 2012

    3.3 Pautas de optimización y rendimiento del GPS y la localización

    El GPS proporciona la información de ubicación más exacta del dispositivo. Sin embargo, al ser una prestación de hardware, consume energía adicional. Por otra parte, al GPS le lleva tiempo obtener sus primeros datos de ubicación. Las siguientes son algunas pautas que se deben seguir al desarrollar aplicaciones que utilicen el GPS y datos de ubicación:

    • Considere todos los proveedores posibles
      Además de GPS_PROVIDER, está NETWORK_PROVIDER. Si sus aplicaciones sólo necesitan los datos de ubicación aproximada, puede considerar el uso de NETWORK_PROVIDER.
    • Use las ubicaciones guardadas en el caché
      Al GPS le lleva tiempo obtener sus primeros datos de ubicación. Cuando la aplicación está esperando que el GPS obtenga una actualización de ubicación precisa, puede usar primero las ubicaciones que proporciona el método LocationManager’s getlastKnownLocation() para realizar parte del trabajo.
    • Reduzca al mínimo la frecuencia y la duración de las solicitudes de actualización de ubicación
      Debe solicitar la solicitud de ubicación sólo cuando sea necesario y cancelar el registro del administrador de ubicación cuando ya no necesite las actualizaciones.

    4. Resumen

    La plataforma Android proporciona interfaces de programación de aplicaciones (API) para que los desarrolladores accedan a los sensores integrados de los dispositivos. Estos sensores son capaces de proporcionar datos sin procesar acerca del movimiento, la posición y las condiciones de entorno del ambiente actuales del dispositivo con gran precisión. Al desarrollar aplicaciones que usen sensores, debe seguir los procedimientos recomendados para mejorar el rendimiento y aumentar la eficiencia.


    Aviso de optimización

    Los compiladores de Intel pueden o no optimizar al mismo grado para microprocesadores que no sean de Intel en el caso de optimizaciones que no sean específicas para los microprocesadores de Intel. Entre estas optimizaciones se encuentran las de los conjuntos de instrucciones SSE2, SSE3 y SSE3, y otras. Intel no garantiza la disponibilidad, la funcionalidad ni la eficacia de ninguna optimización en microprocesadores no fabricados por Intel.

    Las optimizaciones de este producto que dependen de microprocesadores se crearon para utilizarlas con microprocesadores de Intel. Ciertas optimizaciones no específicas para la microarquitectura de Intel se reservan para los microprocesadores de Intel. Consulte las guías para el usuario y de referencia correspondientes si desea obtener más información relacionada con los conjuntos de instrucciones específicos cubiertos por este aviso.

  • Intel for Android Developers Learning Series
  • Developers
  • Partners
  • Professors
  • Students
  • Android*
  • Android*
  • Advanced
  • Beginner
  • Intermediate
  • Sensors
  • Phone
  • Tablet
  • URL
  • Intel for Android Developers Learning Series, No. 4: Android* Tablet Sensors


    1. Sensors on Intel® Atom™ Processor-Based Android Tablets

    Tablets based on Intel Atom processors support a wide variety of hardware sensors. These sensors are used to detect motion and position changes and to report the ambient environmental parameters. The block diagram in Figure 1 shows a possible sensor configuration on a typical Intel Atom processor-based Android tablet.

    Based on the data they report, we can classify the sensors into the classes and types shown in Table 4.1 below.

    Table 4.1    Sensor types supported by the Android platform

    Category             Sensor (Type)                                Measures                                           Common use
    Motion sensors       Accelerometer (TYPE_ACCELEROMETER)           Device acceleration in m/s²                        Motion detection
                         Gyroscope (TYPE_GYROSCOPE)                   Device rotation rate                               Rotation detection
    Position sensors     Magnetometer (TYPE_MAGNETIC_FIELD)           Strength of the Earth's geomagnetic field in µT    Compass
                         Proximity (TYPE_PROXIMITY)                   Proximity of objects in cm                         Nearby-object detection
                         GPS (not an android.hardware sensor type)    Precise geographic location of the device          Accurate geographic location detection
    Environment sensors  Ambient light sensor (TYPE_LIGHT)            Ambient light level in lx                          Automatic screen-brightness control

    2. The Android Sensor Framework

    The Android sensor framework provides mechanisms for accessing the sensors and their data, with the exception of the GPS, which is accessed through the Android location services (discussed later in this chapter). The sensor framework is part of the android.hardware package. Table 4.2 lists the main classes and interfaces that make up the sensor framework.

    Table 4.2    The sensor framework on the Android platform

    Name                 Type       Description
    SensorManager        Class      Used to create an instance of the sensor service. Provides various methods for accessing sensors, registering and unregistering sensor event listeners, and so on.
    Sensor               Class      Used to create an instance of a specific sensor.
    SensorEvent          Class      Used by the system to publish sensor data. It includes the raw sensor data values, the data accuracy, and a timestamp.
    SensorEventListener  Interface  Provides callback methods to receive notifications from the SensorManager when the sensor data or the sensor accuracy has changed.

    2.1 Obtaining the Sensor Configuration

    Which sensors are available on a device is decided solely by the device manufacturer. You can use the sensor framework to discover the available sensors at runtime by invoking the SensorManager method getSensorList() with the parameter Sensor.TYPE_ALL. Code Example 1 shows a fragment that lists the available sensors along with the vendor, power, and accuracy information for each one.

    package com.intel.deviceinfo;      
    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map; 	 
    import android.app.Fragment;
    import android.content.Context;
    import android.hardware.Sensor;
    import android.hardware.SensorManager;
    import android.os.Bundle;
    import android.view.LayoutInflater;
    import android.view.View;
    import android.view.ViewGroup;
    import android.widget.AdapterView;
    import android.widget.AdapterView.OnItemClickListener;
    import android.widget.ListView;
    import android.widget.SimpleAdapter; 
    	 
    public class SensorInfoFragment extends Fragment {  
        private View mContentView;  
        private ListView mSensorInfoList;     
        SimpleAdapter mSensorInfoListAdapter;
        private List<Sensor> mSensorList; 
     
        private SensorManager mSensorManager;  
        @Override
        public void onActivityCreated(Bundle savedInstanceState) {
            super.onActivityCreated(savedInstanceState);
        }
        @Override
        public void onPause() 
        { 
            super.onPause();
        }
        @Override
        public void onResume() 
        {
            super.onResume();
        }
        @Override
        public View onCreateView(LayoutInflater inflater, ViewGroup container,
                Bundle savedInstanceState) {
            mContentView = inflater.inflate(R.layout.content_sensorinfo_main, null);
            mContentView.setDrawingCacheEnabled(false);
            mSensorManager = (SensorManager)getActivity().getSystemService(Context.SENSOR_SERVICE);
            mSensorInfoList = (ListView)mContentView.findViewById(R.id.listSensorInfo);
            mSensorInfoList.setOnItemClickListener( new OnItemClickListener() {
                @Override
                public void onItemClick(AdapterView<?> arg0, View view, int index, long arg3) {
                    // with the index, figure out what sensor was pressed
                    Sensor sensor = mSensorList.get(index);
                    // pass the sensor to the dialog.
                    SensorDialog dialog = new SensorDialog(getActivity(), sensor);
                    dialog.setContentView(R.layout.sensor_display);
                    dialog.setTitle("Sensor Data");
                    dialog.show();
                }
            });            
            return mContentView;
        }      
        void updateContent(int category, int position) {
            mSensorInfoListAdapter = new SimpleAdapter(getActivity(), 
              getData() , android.R.layout.simple_list_item_2,
              new String[] {
                  "NAME",
                  "VALUE"
              },
              new int[] { android.R.id.text1, android.R.id.text2 });
          mSensorInfoList.setAdapter(mSensorInfoListAdapter);
        }
        protected void addItem(List<Map<String, String>> data, String name, String value)   {
            Map<String, String> temp = new HashMap<String, String>();
            temp.put("NAME", name);
            temp.put("VALUE", value);
            data.add(temp);
        }  
        private List<? extends Map<String, ?>> getData() {
            List<Map<String, String>> myData = new ArrayList<Map<String, String>>();
            mSensorList = mSensorManager.getSensorList(Sensor.TYPE_ALL);
            for (Sensor sensor : mSensorList ) {
                addItem(myData, sensor.getName(),  "Vendor: " + sensor.getVendor() + ", min. delay: " + sensor.getMinDelay() +", power while in use: " + sensor.getPower() + "mA, maximum range: " + sensor.getMaximumRange() + ", resolution: " + sensor.getResolution());
            }
            return myData;
        }
    }

    Code Example 1: A fragment that displays the list of sensors**. Source: Intel Corporation, 2012

    2.2 Sensor Coordinate System

    The sensor framework reports sensor data using a standard 3-axis coordinate system, in which X, Y, and Z are represented by values[0], values[1], and values[2], respectively, in the SensorEvent object.


    Sensors such as the light, temperature, and proximity sensors return a single value. For these sensors, only values[0] in the SensorEvent object is used.

    Other sensors report data in the standard 3-axis coordinate system. The following sensors do so:

    • Accelerometer
    • Gravity sensor
    • Gyroscope
    • Geomagnetic field sensor

    The 3-axis sensor coordinate system is defined relative to the device's screen in its natural (default) orientation. For tablets the natural orientation is usually landscape, while for phones it is portrait. When the device is held in its natural orientation, the x axis is horizontal and points to the right, the y axis is vertical and points up, and the z axis points out of the screen. Figure 4.2 shows the sensor coordinate system for a tablet.


    Figure 4.2. The sensor coordinate system
    Source: Intel Corporation, 2012

    The most important point to keep in mind is that the sensor coordinate system never changes when the device moves or when its orientation changes.

    2.3 Monitoring Sensor Events

    The sensor framework reports data through SensorEvent objects. To monitor the data of a specific sensor, a class can implement the SensorEventListener interface and register with the SensorManager for that sensor. The framework informs the class about changes in the sensor's state through the following two SensorEventListener callback methods implemented by the class:

    onAccuracyChanged()
    and
    onSensorChanged()

    Code Example 2 implements the SensorDialog used in the SensorInfoFragment example from the section "Obtaining the Sensor Configuration".

    package com.intel.deviceinfo;
      
    import android.app.Dialog;
    import android.content.Context;
    import android.hardware.Sensor;
    import android.hardware.SensorEvent;
    import android.hardware.SensorEventListener;
    import android.hardware.SensorManager;
    import android.os.Bundle;
    import android.widget.TextView;
      
    public class SensorDialog extends Dialog implements SensorEventListener {
        Sensor mSensor;
        TextView mDataTxt;
        private SensorManager mSensorManager;

        public SensorDialog(Context context) {
            super(context);
            // Obtain the SensorManager so the listener can be registered in onStart().
            mSensorManager = (SensorManager)context.getSystemService(Context.SENSOR_SERVICE);
        }

        public SensorDialog(Context ctx, Sensor sensor) {
            this(ctx);
            mSensor = sensor;
        }
           
        @Override
        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            mDataTxt = (TextView) findViewById(R.id.sensorDataTxt);
            mDataTxt.setText("...");
            setTitle(mSensor.getName());
        }
           
        @Override
        protected void onStart() {
            super.onStart();
            mSensorManager.registerListener(this, mSensor,  SensorManager.SENSOR_DELAY_FASTEST);
        }
                 
        @Override
        protected void onStop() {
            super.onStop();
            mSensorManager.unregisterListener(this, mSensor);
        }
      
        @Override
        public void onAccuracyChanged(Sensor sensor, int accuracy) {
        }
      
        @Override
        public void onSensorChanged(SensorEvent event) {
            if (event.sensor.getType() != mSensor.getType()) {
                return;
            }
            StringBuilder dataStrBuilder = new StringBuilder();
            if ((event.sensor.getType() == Sensor.TYPE_LIGHT)||
                (event.sensor.getType() == Sensor.TYPE_TEMPERATURE)) {
                dataStrBuilder.append(String.format("Data: %.3fn", event.values[0]));
            }
            else{         
                dataStrBuilder.append( 
                    String.format("Data: %.3f, %.3f, %.3f\n", 
                    event.values[0], event.values[1], event.values[2] ));
            }
            mDataTxt.setText(dataStrBuilder.toString());
        }
    }

    Code Example 2: A dialog that displays the sensor values**

    2.4 Motion Sensors

    Motion sensors are used to monitor device movement, such as shaking, turning, swinging, or tilting. The accelerometer and the gyroscope are two motion sensors available on many tablets and phones.

    Motion sensors report data in the sensor coordinate system, where the three values of the SensorEvent object, values[0], values[1], and values[2], represent the values for the x, y, and z axes, respectively.

    To understand motion sensors and apply their data in an application, we need a few physics formulas related to force, mass, acceleration, Newton's laws of motion, and the relationships between these quantities over time. Readers who want to learn more about these formulas and relationships can consult physics textbooks or public-domain materials.

    The accelerometer measures the acceleration applied to the device.

    Table 4.3    The accelerometer        Source: Intel Corporation, 2012

    Sensor         Type                SensorEvent data (m/s²)   Description
    Accelerometer  TYPE_ACCELEROMETER  values[0]                 Acceleration along the x axis
                                       values[1]                 Acceleration along the y axis
                                       values[2]                 Acceleration along the z axis

    The concept of the accelerometer is derived from Newton's second law of motion:
    a = F/m

    The acceleration of an object is the result of the net external force applied to it. External forces include one that applies to every object on Earth: gravity. The acceleration is directly proportional to the net force F applied to the object and inversely proportional to the object's mass m.

    In our code, rather than using the equation above directly, we are usually interested in the effect the acceleration has on the device's velocity and position over a period of time. The following equation describes the relationship between an object's velocity v1, its original velocity v0, the acceleration a, and the time t:
    v1 = v0 + at

    To calculate the object's displacement s, we use the following equation:
    s = v0t + (1/2)at²

    In many cases we start with the condition v0 equal to 0 (before the device starts moving), which simplifies the equation to:
    s = at²/2

    Because of gravity, the gravitational acceleration, represented by the symbol g, applies to every object on Earth. It is independent of the object's mass and depends only on the object's altitude above sea level. Its value varies between 9.78 and 9.82 (m/s²). We adopt the conventional standard value of g:
    g = 9.80665 (m/s²)

    Because the accelerometer returns values in a multidimensional coordinate system, in our code we can calculate the distances along the x, y, and z axes with the following equations:

    Sx = AxT²/2

    Sy = AyT²/2

    Sz = AzT²/2

    Where Sx, Sy, and Sz are the displacements along the x, y, and z axes, respectively, and Ax, Ay, and Az are the accelerations along the x, y, and z axes, respectively. T is the length of the measured time period.
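    In code, these equations become a simple numeric integration over the time elapsed between sensor samples. The following is a minimal sketch, not part of the original article (the class name DisplacementEstimator is illustrative); note that double-integrating raw accelerometer data accumulates error very quickly, so real applications filter the data or combine it with other sensors.

    public class DisplacementEstimator {
        private long mLastTimestampNs = 0; // timestamp of the previous SensorEvent, in ns
        private float mVelocity = 0f;      // estimated velocity along one axis, m/s
        private float mDisplacement = 0f;  // accumulated displacement along that axis, m

        // Call from onSensorChanged() with one axis value (e.g. event.values[0])
        // and the event timestamp (event.timestamp, in nanoseconds).
        public float addSample(float acceleration, long timestampNs) {
            if (mLastTimestampNs != 0) {
                float t = (timestampNs - mLastTimestampNs) * 1.0e-9f; // seconds since last sample
                // s = v0*t + a*t^2/2, then v1 = v0 + a*t
                mDisplacement += mVelocity * t + 0.5f * acceleration * t * t;
                mVelocity += acceleration * t;
            }
            mLastTimestampNs = timestampNs;
            return mDisplacement;
        }
    }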

    Code Example 3 shows how to instantiate an accelerometer.

    public class SensorDialog extends Dialog implements SensorEventListener {
        ... 
        private Sensor mSensor;
        private SensorManager mSensorManager; 
           
        public SensorDialog(Context context) {
            super(context);
            mSensorManager = (SensorManager)context.getSystemService(Context.SENSOR_SERVICE);
            mSensor = mSensorManager.getDefaultSensor(Sensor.TYPE_ACCELEROMETER);
        ...
    }

    Code Example 3: Instantiating an accelerometer (**)
    Source: Intel Corporation, 2012

    Sometimes we do not use the values of all three dimensions, and we may also need to take the device's orientation into account. For example, when developing a maze application, we use only the gravitational acceleration along the x and y axes to compute the ball's movement distances and directions, based on the device's orientation. The following code fragment (Code Example 4) describes the logic.

    @Override
    public void onSensorChanged(SensorEvent event) {
        if (event.sensor.getType() != Sensor.TYPE_ACCELEROMETER) {
            return;
        } 
    float accelX, accelY;
    ...
    //detect the current rotation currentRotation from its “natural orientation”
    //using the WindowManager
        switch (currentRotation) {
            case Surface.ROTATION_0:
                accelX = event.values[0];
                accelY = event.values[1];
                break;
            case Surface.ROTATION_90:
                accelX = -event.values[0];
                accelY = event.values[1];
                break;
            case Surface.ROTATION_180:
                accelX = -event.values[0];
                accelY = -event.values[1];
                break;
            case Surface.ROTATION_270:
                accelX = event.values[0];
                accelY = -event.values[1];
                break;
        }
        //calculate the ball’s moving distances along x, and y using accelX, accelY and the time delta
            ...
        }
    }

    Code Example 4: Taking device orientation into account when using accelerometer data in a maze game**
    Source: Intel Corporation, 2012
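    Code Example 4 assumes that currentRotation has already been detected with the WindowManager. The following is a minimal sketch of one way this could be done (the helper name RotationHelper is ours, not the article's):

    import android.app.Activity;

    public class RotationHelper {
        // Returns Surface.ROTATION_0, ROTATION_90, ROTATION_180, or ROTATION_270,
        // describing how far the screen is currently rotated from its natural orientation.
        public static int getCurrentRotation(Activity activity) {
            return activity.getWindowManager().getDefaultDisplay().getRotation();
        }
    }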

    The gyroscope measures the device's rate of rotation around the x, y, and z axes, as shown in Table 4.4. Gyroscope data values can be positive or negative. Looking at the origin from a position on the positive half of an axis, if the rotation around that axis is counterclockwise the value is positive; if the rotation is clockwise the value is negative. We can also determine the sign of the gyroscope values with the "right-hand rule", illustrated in Figure 4.3.


    Table 4.4    The gyroscope        Source: Intel Corporation, 2012

    Sensor     Type            SensorEvent data (rad/s)   Description
    Gyroscope  TYPE_GYROSCOPE  values[0]                  Rate of rotation around the x axis
                               values[1]                  Rate of rotation around the y axis
                               values[2]                  Rate of rotation around the z axis

    Code Example 5 shows how to instantiate a gyroscope.

    public class SensorDialog extends Dialog implements SensorEventListener {
        ... 
        private Sensor mGyro;
        private SensorManager mSensorManager; 
           
        public SensorDialog(Context context) {
            super(context);
            mSensorManager = (SensorManager)context.getSystemService(Context.SENSOR_SERVICE);
            mGyro = mSensorManager.getDefaultSensor(Sensor.TYPE_GYROSCOPE);
        ...
    }

    Code Example 5: Instantiating a gyroscope**
    Source: Intel Corporation, 2012
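    As a complement to Code Example 5 (a sketch of ours, not code from the article), the gyroscope's rate-of-rotation values can be integrated over the event timestamps to estimate an accumulated rotation angle. Plain gyroscope integration drifts over time, so production code usually fuses it with other sensors.

    import android.hardware.Sensor;
    import android.hardware.SensorEvent;
    import android.hardware.SensorEventListener;

    public class GyroIntegrator implements SensorEventListener {
        private static final float NS2S = 1.0f / 1000000000.0f; // nanoseconds to seconds
        private long mLastTimestamp = 0;
        private float mAngleAroundZ = 0f; // accumulated rotation around the z axis, in radians

        @Override
        public void onSensorChanged(SensorEvent event) {
            if (event.sensor.getType() != Sensor.TYPE_GYROSCOPE) {
                return;
            }
            if (mLastTimestamp != 0) {
                float dT = (event.timestamp - mLastTimestamp) * NS2S; // seconds since last event
                mAngleAroundZ += event.values[2] * dT;                // rad/s * s = rad
            }
            mLastTimestamp = event.timestamp;
        }

        @Override
        public void onAccuracyChanged(Sensor sensor, int accuracy) {
        }

        public float getAngleAroundZ() {
            return mAngleAroundZ;
        }
    }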

    2.5 Position Sensors

    Many Android tablets support two position sensors: the magnetometer and the proximity sensor. The magnetometer measures the strength of the Earth's magnetic field along the x, y, and z axes, while the proximity sensor detects the distance from the device to another object.

    2.5.1 Magnetometer

    The most important use the Android system makes of the magnetometer (described in Table 4.5) is to implement the compass.

    Table 4.5    The magnetometer        Source: Intel Corporation, 2012

    Sensor        Type                 SensorEvent data (µT)   Description
    Magnetometer  TYPE_MAGNETIC_FIELD  values[0]               Earth's magnetic field strength along the x axis
                                       values[1]               Earth's magnetic field strength along the y axis
                                       values[2]               Earth's magnetic field strength along the z axis

    Code Example 6 shows how to instantiate a magnetometer.

    public class SensorDialog extends Dialog implements SensorEventListener {
        ... 
        private Sensor mMagnetometer;
        private SensorManager mSensorManager; 
           
        public SensorDialog(Context context) {
            super(context);
            mSensorManager = (SensorManager)context.getSystemService(Context.SENSOR_SERVICE);
            mMagnetometer = mSensorManager.getDefaultSensor(Sensor.TYPE_MAGNETIC_FIELD);
        ...
    }

    Code Example 6: Instantiating a magnetometer**
    Source: Intel Corporation, 2012
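    A compass is typically built by combining magnetometer and accelerometer readings. The following minimal sketch (ours, not from the article; the class name CompassListener is illustrative) uses SensorManager.getRotationMatrix() and SensorManager.getOrientation() to derive a heading; the listener must be registered for both TYPE_ACCELEROMETER and TYPE_MAGNETIC_FIELD.

    import android.hardware.Sensor;
    import android.hardware.SensorEvent;
    import android.hardware.SensorEventListener;
    import android.hardware.SensorManager;

    public class CompassListener implements SensorEventListener {
        private float[] mGravity;       // latest TYPE_ACCELEROMETER values
        private float[] mGeomagnetic;   // latest TYPE_MAGNETIC_FIELD values
        private float mAzimuthDegrees;  // heading around the z axis; 0 = magnetic north

        @Override
        public void onSensorChanged(SensorEvent event) {
            if (event.sensor.getType() == Sensor.TYPE_ACCELEROMETER) {
                mGravity = event.values.clone();
            } else if (event.sensor.getType() == Sensor.TYPE_MAGNETIC_FIELD) {
                mGeomagnetic = event.values.clone();
            }
            if (mGravity == null || mGeomagnetic == null) {
                return; // need one reading from each sensor before computing a heading
            }
            float[] r = new float[9];
            float[] i = new float[9];
            if (SensorManager.getRotationMatrix(r, i, mGravity, mGeomagnetic)) {
                float[] orientation = new float[3];
                SensorManager.getOrientation(r, orientation);
                mAzimuthDegrees = (float) Math.toDegrees(orientation[0]);
            }
        }

        @Override
        public void onAccuracyChanged(Sensor sensor, int accuracy) {
        }

        public float getAzimuthDegrees() {
            return mAzimuthDegrees;
        }
    }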

    2.5.2 Proximity

    The proximity sensor provides the distance between the device and another object. The device can use it to detect whether it is being held close to the user (see Table 4.6), and thus determine whether the user is making or receiving a phone call.

    Table 4.6    The proximity sensor        Source: Intel Corporation, 2012

    Sensor     Type            SensorEvent data   Description
    Proximity  TYPE_PROXIMITY  values[0]          Distance to an object in cm. Some proximity sensors only report a boolean value indicating whether the object is close enough.

    Code Example 7 shows how to instantiate a proximity sensor.

    public class SensorDialog extends Dialog implements SensorEventListener {
        ... 
        private Sensor mProximity;
        private SensorManager mSensorManager; 
           
        public SensorDialog(Context context) {
            super(context);
            mSensorManager = (SensorManager)context.getSystemService(Context.SENSOR_SERVICE);
            mProximity = mSensorManager.getDefaultSensor(Sensor.TYPE_PROXIMITY);
        ...
    }

    Code Example 7: Instantiating a proximity sensor**
    Source: Intel Corporation, 2012

    2.6 Environment Sensors

    Environment sensors detect and report the ambient environmental parameters of the device's surroundings. The availability of a particular sensor depends solely on the device manufacturer. The ambient light sensor (ALS) is available on many Android tablets.

    2.6.1 Ambient Light Sensor (ALS)

    The system uses the ambient light sensor, described in Table 4.7, to detect the illumination of the surroundings and automatically adjust the screen brightness accordingly.

    Table 4.7    The ambient light sensor        Source: Intel Corporation, 2012

    Sensor                Type        SensorEvent data (lx)   Description
    Ambient light sensor  TYPE_LIGHT  values[0]               Illumination around the device

    Code Example 8 shows how to instantiate an ambient light sensor.

        ... 
        private Sensor mALS;
        private SensorManager mSensorManager; 
      
        ... 
            mSensorManager = (SensorManager)context.getSystemService(Context.SENSOR_SERVICE);
            mALS = mSensorManager.getDefaultSensor(Sensor.TYPE_LIGHT);
        ...

    Code Example 8: Instantiating an ambient light sensor**
    Source: Intel Corporation, 2012

    2.7 Sensor Performance and Optimization Guidelines

    To use sensors in your applications, you should follow these guidelines:

    • Always check sensor availability before using a specific sensor
      The Android platform does not require any specific sensor to be included in or excluded from a device. Which sensors are included is decided solely by the device manufacturer. Before using a sensor in your application, always check first that it is actually available.
    • Always unregister sensor listeners
      If the activity that implements the sensor listener becomes invisible, or the dialog stops, unregister the sensor listener. This can be done in the activity's onPause() method or in the dialog's onStop() method. If you do not follow this guideline, the sensor will keep acquiring data and, as a consequence, drain the battery. (A minimal sketch of this register/unregister pattern follows the list.)
    • Do not block the onSensorChanged() method
      The system calls onSensorChanged() frequently to report sensor data. This method should contain as little logic as possible. Complicated computations on sensor data should be moved out of it.
    • Always test sensor applications on real devices
      All the sensors described in this section are hardware sensors. The Android emulator may not be sufficient to simulate a sensor's functionality and performance.
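    As a minimal illustration of the first two guidelines (a sketch with an assumed activity name, not code from the article), the following activity checks sensor availability in onCreate(), registers its listener in onResume(), and unregisters it in onPause():

    import android.app.Activity;
    import android.hardware.Sensor;
    import android.hardware.SensorEvent;
    import android.hardware.SensorEventListener;
    import android.hardware.SensorManager;
    import android.os.Bundle;

    public class AccelActivity extends Activity implements SensorEventListener {
        private SensorManager mSensorManager;
        private Sensor mAccelerometer;

        @Override
        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            mSensorManager = (SensorManager) getSystemService(SENSOR_SERVICE);
            // getDefaultSensor() returns null if the device has no accelerometer.
            mAccelerometer = mSensorManager.getDefaultSensor(Sensor.TYPE_ACCELEROMETER);
        }

        @Override
        protected void onResume() {
            super.onResume();
            if (mAccelerometer != null) {
                mSensorManager.registerListener(this, mAccelerometer,
                        SensorManager.SENSOR_DELAY_NORMAL);
            }
        }

        @Override
        protected void onPause() {
            super.onPause();
            mSensorManager.unregisterListener(this); // stop receiving data to save battery
        }

        @Override
        public void onSensorChanged(SensorEvent event) { /* keep this lightweight */ }

        @Override
        public void onAccuracyChanged(Sensor sensor, int accuracy) { }
    }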

    3 GPS and Location

    The Global Positioning System (GPS) is a satellite-based system that provides accurate geographic location information around the world. It is available on a wide variety of Android tablets. In many respects it behaves like a position sensor: it can provide accurate location data to applications running on the device. On the Android platform, the sensor framework does not handle the GPS directly. Instead, the Android location service accesses GPS data and passes it to applications through location listener callbacks.

    3.1 Android Location Services

    Using the GPS is not the only way to obtain location information on Android devices. The system can also use Wi-Fi*, cellular networks, and other wireless networks to obtain the device's current location. The GPS and the wireless networks (including Wi-Fi and cellular networks) act as "location providers" for the Android location services. Table 4.8 lists the main classes and interfaces used to access the Android location services:

    Table 4.8    The location service on the Android platform        Source: Intel Corporation, 2012

    Name              Type            Description
    LocationManager   Class           Used to access the location services. Provides various methods for requesting periodic location updates for an application, or for sending proximity alerts.
    LocationProvider  Abstract class  The abstract superclass for location providers.
    Location          Class           Used by location providers to encapsulate geographic data.
    LocationListener  Interface       Used to receive notifications from the LocationManager.
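    Before requesting updates, an application can check which providers are currently enabled on the device. The following is a minimal sketch of ours (the helper name ProviderCheck is illustrative, not part of the article):

    import java.util.List;

    import android.content.Context;
    import android.location.LocationManager;

    public class ProviderCheck {
        // Returns true if the GPS provider is currently enabled on this device.
        public static boolean isGpsEnabled(Context context) {
            LocationManager lm =
                (LocationManager) context.getSystemService(Context.LOCATION_SERVICE);
            List<String> enabledProviders = lm.getProviders(true); // true = only enabled providers
            return enabledProviders.contains(LocationManager.GPS_PROVIDER);
        }
    }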

    3.2 Obtaining GPS Location Updates

    Similar to the mechanism of using the sensor framework to access sensor data, the application implements several callback methods defined in the LocationListener interface to receive GPS location updates. The LocationManager sends GPS update notifications to the application through these callbacks (the "Don't call us, we'll call you" rule).

    To access GPS location data in the application, you must request the fine location access permission in your Android manifest file (Code Example 9).

    <manifest ...>
    ...
        <uses-permission android:name="android.permission.ACCESS_FINE_LOCATION"...  
    </manifest>

    Code Example 9: Requesting the fine location access permission in the manifest file**
    Source: Intel Corporation, 2012

    Code Example 10 shows how to obtain GPS updates and display the latitude and longitude coordinates in a dialog's text view.

    package com.intel.deviceinfo;
      
    import android.app.Dialog;
    import android.content.Context;
    import android.location.Location;
    import android.location.LocationListener;
    import android.location.LocationManager;
    import android.os.Bundle;
    import android.widget.TextView;
      
    public class GpsDialog extends Dialog implements LocationListener {
        TextView mDataTxt;
        private LocationManager mLocationManager;
           
        public GpsDialog(Context context) {
            super(context);
            mLocationManager = (LocationManager)context.getSystemService(Context.LOCATION_SERVICE);
        }
      
        @Override
        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            mDataTxt = (TextView) findViewById(R.id.sensorDataTxt);
            mDataTxt.setText("...");

            setTitle("Gps Data");
        }
           
        @Override
        protected void onStart() {
            super.onStart();
            mLocationManager.requestLocationUpdates(
                LocationManager.GPS_PROVIDER, 0, 0, this);
        }
                 
        @Override
        protected void onStop() {
            super.onStop();
            mLocationManager.removeUpdates(this);
        }
      
        @Override
        public void onStatusChanged(String provider, int status, 
            Bundle extras) {
        }
      
        @Override
        public void onProviderEnabled(String provider) {
        }
      
        @Override
        public void onProviderDisabled(String provider) {
        }
      
        @Override
        public void onLocationChanged(Location location) {
            StringBuilder dataStrBuilder = new StringBuilder();
            dataStrBuilder.append(String.format("Latitude: %.3f,   Logitude%.3fn", location.getLatitude(), location.getLongitude()));
            mDataTxt.setText(dataStrBuilder.toString());
                 
        }
    }

    Code Example 10: A dialog that displays GPS location data**
    Source: Intel Corporation, 2012

    3.3 GPS and Location Performance and Optimization Guidelines

    The GPS provides the most accurate location information about the device. However, as a hardware feature, it consumes extra power, and it takes the GPS time to obtain its first location fix. Here are some guidelines to follow when developing applications that use GPS and location data:

    • Consider all possible location providers
      Besides GPS_PROVIDER, there is NETWORK_PROVIDER. If your application only needs coarse location data, you can consider using NETWORK_PROVIDER.
    • Use cached locations
      It takes the GPS time to obtain its first location fix. While the application is waiting for the GPS to deliver an accurate location update, it can first use the locations provided by the LocationManager's getLastKnownLocation() method to perform part of the work (see the sketch after this list).
    • Minimize the frequency and duration of location update requests
      Request location updates only when needed, and unregister from the location manager as soon as you no longer need the updates.
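    A minimal sketch of the cached-location guideline (the helper name LastLocationHelper is ours; it assumes the fine location permission from Code Example 9 has been granted, which also covers the network provider):

    import android.location.Location;
    import android.location.LocationManager;

    public class LastLocationHelper {
        // Returns the most recent cached location, preferring GPS and falling back to
        // the network provider; may return null if no provider has a cached fix yet.
        public static Location getBestCachedLocation(LocationManager lm) {
            Location gps = lm.getLastKnownLocation(LocationManager.GPS_PROVIDER);
            if (gps != null) {
                return gps;
            }
            return lm.getLastKnownLocation(LocationManager.NETWORK_PROVIDER);
        }
    }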

    4. Summary

    The Android platform provides application programming interfaces (APIs) that let developers access a device's built-in sensors. These sensors can provide raw data about the device's current motion, position, and ambient environmental conditions with high precision. When developing applications that use sensors, you should follow the recommended practices to improve performance and efficiency.


    Optimization Notice

    Intel's compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors. These optimizations include SSE2, SSE3, and SSSE3 instruction sets and other optimizations. Intel does not guarantee the availability, functionality, or effectiveness of any optimization on microprocessors not manufactured by Intel.

    Microprocessor-dependent optimizations in this product are intended for use with Intel microprocessors. Certain optimizations not specific to Intel microarchitecture are reserved for Intel microprocessors. Please refer to the applicable product User and Reference Guides for more information regarding the specific instruction sets covered by this notice.

    Notice revision #20110804

  • Intel for Android Developers Learning Series
  • Developers
  • Partners
  • Professors
  • Students
  • Android*
  • Android*
  • Advanced
  • Beginner
  • Intermediate
  • Sensors
  • Phone
  • Tablet
  • URL
  • Flight with the Navigator | Making “The Big Change” – Again, E4


     

    The idea of a smartphone that doesn't require a data plan is neither mine alone nor a new concept to me. I found Scratch* Wireless because I was looking for the service. So during the quest, dear daughter number one mentioned a must-have on her always-with-her portable device - Navigation.

    Last summer I did some research and landed on Sygic* GPS Navigation. Sorry, this means I don't recall the complete decision process or the reviewers. That past research did make putting a nav-app on the phone easier, since it was already in my cloud of purchased apps.

    Why this Navigator:
    1) It was already in my app list
    2) Works w/o a data connection; still needs the GPS receivers running, of course
    3) Good/satisfactory reviews
    4) TomTom* Maps - I'd rather pay for the "known" maps today. There are open/free map sources; I may still try a nav-app based on one of those before buying maps for the family
    5) 7-day trial - might be usage days or device days, because mine has a date pretty far out there. Only used it on the tablet once - there were an awful lot of issues with it staying on my trip - maybe my tablet has bad satellite reception?
    6) Turn-by-turn voice directions - the Garmin* got us accustomed to this idea 15 years ago; a real must for automotive navigation.

    The second impression:
    This app is very Timilizable and enough like my previous nav-tools that I didn't have any problem getting it pointed toward my house. I did have to run through some selection screens (language, etc.) upon first launch. One nice feature is that you can download maps one continent, country, or state at a time. Product options currently include North America, United States, Canada, and a long list of other countries - a bonus of having TomTom maps as their partner, I'll guess. I live in one state, so I downloaded the ~60MB file for just my state - over WiFi!

    The default user data entry path for getting your destination into the app is not my favorite model, but I have seen it before, both on one of my earlier Garmins* and in the app we put on our first iPhone. The flow is just a little too regimented for me. Essentially you must fill in a separate field for each detail line of an address, "backwards" from how you'd put it on an envelope. I do hope there's an option to have the app search for an address I type in, the way Google* Maps and the iPhone* Maps app allow. What do I mean?

    Here's the process:
    USA <TAB> State <TAB> City <TAB> Street Number <TAB> Street <TAB> Zip Code <SEARCH> <SCROLL><SELECT>

    That's a lot of opportunity for fumble-fingering, even with the real keyboard. Not a deal breaker though. I put the phone in the windshield holder and started driving. About the time I left the parking lot she started telling me where to go, so I have named her - probably inappropriately for this blog - so imagine what I would call a woman who tells me where to go. Directions were clear, accurate, and timely enough to be helpful. Graphics look good and track well with the journey. Soon, though, I noticed the speed limit graphic said "40" in a 25 MPH zone. I was bummed; I thought the map data was wrong. Then I looked at the units - it defaults to metric. Three stoplights and 6-10 screens later I found the setting and changed it to the standard units more familiar to me. In addition to miles it gives the options of feet or yards. A new feature to me, and nice; if I ever want to change it, though, I might want to do that at home. She continued to guide me through the countryside, giving me audible & visual warnings of railroad crossings and larger intersections. Somehow she KNEW the stretch where I had a head-on collision 15 years ago and warned me there too. I guess she may have been highlighting the intersection of 2 highways, but with all Edward Snowden knows, I'm sure she's got more on me than my home address.

    Ms. Smartie-pants GPS Navigator's route to my house was not the exact route I take, so I strayed. She re-aligned her path and displayed what looked like my way. About 1 mile into it she told me to take a safe U-turn. Oh well, she eventually figured it out. I made it home, and Sygic GPS Navigation knew where I was all the way down to a pretty accurate altitude. She and whatever else was running on the phone used about 15% of the battery in a 45-minute drive. Contributors to battery drain include the screen staying on the whole drive and the nav-app requiring me to turn off airplane mode - I had earlier discovered this phone allows me to turn off the radios that connect to the cellular towers and run just WiFi or Bluetooth or both. However, it seems the turn-by-turn directions or the phone need airplane mode off (aka all radios available) to have the GPS receivers running. Turn-by-turn GPS navigation eats my iPhone battery too, so I'm very satisfied with this experience. Since the maps are on the phone, if I had a navigator it would be pretty easy to set up a trip while stationary, turn off the radios, and have my human assistant touchscreen me through the journey old-school, saving power. Then again, I could just plug the thing into the aux power port (cigarette lighter for you old timers).

    This nav-app may be the thing; the device and service are feeling more and more like something my kids and I can certainly use. The jury is still out on the less tech-tolerant. There are many Timizable options in Android - might be just a few too many for some. I'm more than OK with this experience so far.

    2 days of use, still no spending on service. Voicemail worked - didn't hear any ringing though. Voicemail delivered in e-mail. This can work.

    References:
    Sygic <http://www.sygic.com/en/gps-navigation?r=topmenu>
    Garmin <http://www.garmin.com/en-US>
    TomTom <http://www.tomtom.com/en_us/>
    Scratch Wireless <http://www.scratchwireless.com>

     

     


  • Education
  • Geolocation
  • Mobility
  • Sensors
  • User Experience and Design
  • Android*
  • Phone
  • Developers
  • Students
  • Android*
  • Krita* Gemini* - Twice as Nice on a 2-in-1


    Download PDF

    Why 2-in-1

    A 2 in 1 is a PC that transforms between a laptop computer and a tablet. Laptop mode (sometimes referred to as desktop mode) allows a keyboard and mouse to be used as the primary input devices. Tablet mode relies on the touchscreen, thus requiring finger or stylus interaction. A 2 in 1, like the Intel® Ultrabook™ 2 in 1, offers precision and control with multiple input options that allow you to type when you need to work and touch when you want to play.

    Developers have to consider multiple scenarios when modifying their applications to take advantage of this new type of transformable computer. Some applications may want to keep the menus and appearance nearly identical in both modes, while others, like Krita Gemini for Windows* 8 (Reference 1), will want to carefully select what is highlighted and made available in each user interface mode. Krita is a program for sketching and painting that offers an end-to-end solution for creating digital painting files from scratch (Reference 2). This article discusses how the Krita developers added 2 in 1 mode-awareness to their application - including implementation of both automatic and user-selected mode switching - and some of the areas developers should consider when bringing the 2 in 1 experience to their own applications.

    Introduction

    Over the years, computers have used a variety of input methods, from punch cards to command lines to point-and-click. With the adoption of touch screens, we can now point-and-click with a mouse, stylus, or fingers. Most of us are not ready to do everything with touch, and with mode-aware applications like Krita Gemini, we don’t have to. 2 in 1s, like an Intel® Ultrabook™ 2 in 1, can deliver the user interface mode that gives the best experience possible, on one device.

    There are multiple ways that a 2 in 1 computer can transform between laptop and tablet modes (Figure 1 & Figure 2); many more examples of 2 in 1 computers can be found on the Intel website (Reference 3). The computer can transform from laptop mode into tablet mode by detaching the screen from the keyboard, or by some other means of disabling the keyboard and making the screen the primary input device (such as folding the screen on top of the keyboard). Computer manufacturers are beginning to provide this hardware transition information to the operating system: the Windows* 8 WM_SETTINGCHANGE message with the "ConvertibleSlateMode" text parameter signals the automatic laptop-to-tablet and tablet-to-laptop mode changes. It is also a good idea for developers to include a manual mode change button for users' convenience.

    Just as there are multiple ways that the 2 in 1 can transform between laptop and tablet modes, software can be designed in different ways to respond to the transformation. In some cases it may be desirable to keep the UI as close to the laptop mode as possible, while in other cases you may want to make more significant changes to the UI. Intel has worked with many vendors to help them add 2 in 1 awareness to their applications. Intel helped KO GmbH combine the functionality of their Krita Touch application with their popular Krita open source painting program (laptop application) in the new Krita Gemini application. The Krita project is an active development community, welcoming new ideas and maintaining quality support. The team added the mechanisms required to provide a seamless transition from the laptop "mouse and keyboard" mode to the touch interface for tablet mode. See Krita Gemini's user interface (UI) transformations in action in the short video in Figure 3.


    Figure 3: Video - Krita Gemini UI Change – click icon to run

    Create in Tablet Mode, Refine in Laptop Mode

    The Gemini team set out to maximize the user experience in the two modes of operation. In Figure 4 & Figure 5 you can see that the UI changes from one mode to the other are many and dramatic. This allows the user to seamlessly move from drawing “in the field” while in tablet mode to touch-up and finer detail work when in laptop mode.


    Figure 4:Krita Gemini tablet user interface


    Figure 5: Krita Gemini laptop user interface

    There are three main steps to making an application transformable between the two modes of operation.

    Step one: make the application touch aware. We were somewhat lucky in that the touch-aware work was started well ahead of the 2 in 1 activity; it is usually a heavier lift than the tablet-mode transition work. Intel has published articles on adding touch input to a Windows 8 application (Reference 4).

    Step two: add 2 in 1 awareness. The first part of the video (Figure 3) above demonstrates the automatic, sensor-activated mode change, a rotation in this case (Figure 6). After that, the user-initiated transition via a button in the application is shown (Figure 7).


    Figure 6:Sensor-state activated 2 in 1 mode transition


    Figure 7:Switch to Sketch transition button – user initiated action for laptop to tablet mode

    Support for automatic transitions requires the sensor state to be defined and monitored, and appropriate actions to be taken once the state is known. In addition, a user-initiated mode transition should be included as a courtesy to the user, should she wish to be in tablet mode when the code favors laptop mode. You can reference the Intel article "How to Write a 2-in-1 Aware Application" for an example approach to adding the sensor-based transition (Reference 5). Krita's code for managing the transitions from one mode to the other can be found in their source code by searching for "SlateMode" (Reference 6). Krita is released under a GNU Public License; please refer to the source code repository for the latest information (Reference 7).

    // Snip from Gemini - Define 2-in1 mode hardware states:

    #ifdef Q_OS_WIN
    #include <shellapi.h>
    #define SM_CONVERTIBLESLATEMODE 0x2003
    #define SM_SYSTEMDOCKED 0x2004
    #endif

    Not all touch-enabled computers offer the automatic transition, so we suggest you do as the Krita Gemini team did here and include a button in your application to allow the user to manually initiate the transition from one mode to the other. Gemini's button is shown in Figure 7. The button-initiated UI transition performs the same functions as the mechanical-sensor-initiated transition: the screen information and default input device change from touch and large icons in tablet mode to keyboard, mouse, and smaller icons in laptop mode. However, since the sensor path is not involved, the button method must perform the screen, icon, and default input device changes without the sensor-state information. Developers should therefore always leave the user a way to switch from one mode to the other with touch or mouse, regardless of the current UI state, in case the user ends up in a mode that does not suit the situation.

    The button definition - KAction() - as well as its states and actions are shown in the code below (Reference 6):

    // Snip from Gemini - Define 2-in1 Mode Transition Button:
    
             toDesktop = new KAction(q);
             toDesktop->setEnabled(false);
             toDesktop->setText(tr("Switch to Desktop"));
             connect(toDesktop, SIGNAL(triggered(Qt::MouseButtons,Qt::KeyboardModifiers)), q, SLOT(switchDesktopForced()));
             connect(toDesktop, SIGNAL(triggered(Qt::MouseButtons,Qt::KeyboardModifiers)), q, SLOT(switchToDesktop()));
             sketchView->engine()->rootContext()->setContextProperty("switchToDesktopAction", toDesktop);

    Engineers then took on the task of handling the events triggered by the button: the code first checks the last known state of the system (since it cannot assume it is running on a 2-in-1 system) and then changes the mode (Reference 6):

    // Snip from Gemini - Perform 2-in1 Mode Transition via Button:
    
    #ifdef Q_OS_WIN
    bool MainWindow::winEvent( MSG * message, long * result ) {
         if (message && message->message == WM_SETTINGCHANGE && message->lParam)
         {
             if (wcscmp(TEXT("ConvertibleSlateMode"), (TCHAR *) message->lParam) == 0)
                 d->notifySlateModeChange();
             else if (wcscmp(TEXT("SystemDockMode"), (TCHAR *) 
    message->lParam) == 0)
                 d->notifyDockingModeChange();
             *result = 0;
             return true;
         }
         return false;
    }
    #endif
    
    void MainWindow::Private::notifySlateModeChange()
    {
    #ifdef Q_OS_WIN
         bool bSlateMode = (GetSystemMetrics(SM_CONVERTIBLESLATEMODE) == 0);
    
         if (slateMode != bSlateMode)
         {
             slateMode = bSlateMode;
             emit q->slateModeChanged();
             if (forceSketch || (slateMode && !forceDesktop))
             {
                 if (!toSketch || (toSketch && toSketch->isEnabled()))
                     q->switchToSketch();
             }
             else
             {
                     q->switchToDesktop();
             }
             //qDebug() << "Slate mode is now"<< slateMode;
         }
    #endif
    }
    
    void MainWindow::Private::notifyDockingModeChange()
    {
    #ifdef Q_OS_WIN
         bool bDocked = (GetSystemMetrics(SM_SYSTEMDOCKED) != 0);
    
         if (docked != bDocked)
         {
             docked = bDocked;
             //qDebug() << "Docking mode is now"<< docked;
         }
    #endif
    }

    Step three: fix issues discovered during testing. While using the palette in touch or mouse mode is fairly easy, the workspace itself needs to hold focus and zoom consistent with the user's expectations. Therefore, making everything bigger was not an option. Controls got bigger for touch interaction in tablet mode, but the screen image itself needed to be managed at a different level so as to keep the expected user experience. Notice in the video (Figure 3) that the image in the edit pane stays the same size on the screen from one mode to the other. This was an area that took creative solutions from the developers to reserve screen real estate and keep the image consistent. Another issue was that an initial effort had both UIs running, which adversely affected performance because both UIs shared the same graphics resources. Adjustments were made in both UIs to keep the allotted resource requirements as distinct as possible and to prioritize the active UI wherever possible.

    Wrap-up

    As you can see, adding 2 in 1 mode awareness to your application is a pretty straightforward process. You need to look at how your users will interact with your application when in one interactive mode versus the other. Read the Intel article “Write Transformational Applications for 2 in 1 Devices Based on Ultrabook™ Designs” for more information on creating an application with a transforming user interface (Reference 8). For Krita Gemini, the decision was made to make creating drawings and art simple while in tablet mode and add the finishing touches to those creations while in the laptop mode. What can you highlight in your application when presenting it to users in tablet mode versus laptop mode?

    References

    1. Krita Gemini General Information
    2. Krita Gemini executable download (scroll to Krita Gemini link)
    3. Intel.com 2 in 1 information page
    4. Intel Article: Mixing Stylus and Touch Input on Windows* 8 by Meghana Rao
    5. Intel Article: How to Write a 2-in-1 Aware Application by Stephan Rogers
    6. Krita Gemini mode transition source code download
    7. KO GmbH Krita Gemini source code and license repository
    8. Intel® Developer Forum 2013 Presentation by Meghana Rao (pdf) - Write Transformational Applications for 2 in 1 Devices Based on Ultrabook™ Designs
    9. Krita 2 in 1 UI Change Video on IDZ or YouTube*

    About the Author

    Tim Duncan is an Intel Engineer and is described by friends as “Mr. Gidget-Gadget.” Currently helping developers integrate technology into solutions, Tim has decades of industry experience, from chip manufacturing to systems integration. Find him on the Intel® Developer Zone as Tim Duncan (Intel)

     

    Intel and the Intel logo are trademarks of Intel Corporation in the U.S. and/or other countries.
    Copyright © 2013-2014 Intel Corporation. All rights reserved.
    *Other names and brands may be claimed as the property of others.

     

  • 2-in-1
  • Krita Gemini
  • Developers
  • Microsoft Windows* (XP, Vista, 7)
  • Microsoft Windows* 8
  • UX
  • Windows*
  • Game Development
  • Graphics
  • Sensors
  • Touch Interfaces
  • User Experience and Design
  • Laptop
  • Tablet
  • URL
  • MWC 2014 and the Internet of Things Hackathon


    On the same day that we announced the launch of our new IoT developer program at Mobile World Congress, we kicked off two sets of hackathons in Barcelona. With fifty developers each day, each with an Intel® Galileo board, a live USB, a Wi-Fi adapter, a set of cables, and some sensors, this was a pilot for the series of twenty IoT hackathons planned for later in the year.

    The before-and-after image below shows the large pile of boxes that arrived from various vendors and destinations. These were assembled on site to create the development kits and distributed to the attendees each day. Individual developers and teams from local universities worked their way from the blinking-LED example to actually sending sensor data to the cloud, also known as the IoT Analytics Platform as a Service. The after image shows what one team built using two Galileo boards: a servo-controlled car with a direction-tracking camera.

    before and after at mwc hackathon

    On Wednesday, after a day or two of hacking, the developers were invited back to give the Intel team some feedback on their experience. More than 65% of the developers came back to help us improve the developer kit with "constructive feedback". Luckily most of the cursing was in Spanish, as befits an event in Barcelona, and following our social media policy of "good, or bad, but not ugly", all of the participants' input was gathered. Despite the anticipated teething problems, and even some devkit components not clearing customs until late on the afternoon of day one, everyone seemed to have a good time.

    Hacking the hackathon.

    Our null modem cables were delayed by airport customs and did not arrive until 16:30 on our first day, but in true hackathon fashion the developers used the kit of small jumper cables we supplied to construct a pin-swapper to get the serial port data from the Galileo board. One by one this solution rippled from team to team, like a real-world viral video, and soon every team had a "community developed" fix to the problem. It wasn't very robust, but it was functional, and it epitomized the lessons I've learned from participating in several local maker hackathons. Step one: get something working that you can demonstrate. Step two: there is no step two.

    If you are a maker, and can build something using hot glue and duct tape, I hope to meet you at our upcoming series of hackathons.

     

     


  • Cloud Computing
  • Open Source
  • Sensors
  • C/C++
  • Internet of Things
  • Developers
  • Professors
  • Students
  • Arduino
  • Linux*
  • Yocto Project

