Case Study

The Album Application

In the mobile version of the AIR application, the user takes a picture or pulls one from the camera roll. She can save it to a dedicated database, along with an audio caption and geolocation information. The group of saved images is viewable in a scrollable menu. Images can be sent from the device to the desktop via a wireless network.

In the desktop version, the user can see an image at full resolution on a large screen. It can be saved to the desktop, and it can also be edited and uploaded to a photo service.

Please download the two applications from this book’s website at http://oreilly.com/catalog/9781449394820.

Design

The design is simple, using primary colors and crisp type. The project was not developed for a flexible layout. It was created at an 800×480 resolution with auto-orientation turned off; you can use it as a base for experimenting with other resolutions. The art is provided as a Flash movie to use in Flash Professional, or as an .swc file to import into Flash Builder by selecting Properties→ActionScript Build Path→Library Path and clicking “Add swc.”

Architecture

The source code consists of the Main document class and the model, view, and events packages (see Figure 17-1).

The model package contains the AudioManager, the SQLManager, the GeoService, and the PeerService. The view package contains the NavigationManager and the various views. The events package contains the various custom events.

The SQLManager class is static, so it can be accessed from anywhere in the application without instantiation. The other model classes are passed by reference.

Flow

The flow of the application is straightforward. The user goes through a series of simple tasks, one step at a time. In the opening screen, the OpeningView, the user can select a new picture or go to the group menu of images, as shown in Figure 17-2.

From the AddView page, the user can open the Media Gallery or launch the camera, as shown in Figure 17-3. Both choices go to the same CameraView view. An id parameter is passed to the new view to determine the mode.

The image data received from either source is resized to fit the dimensions of the stage. The user can take another picture or accept the current photograph if satisfied. The image URL is sent to the SQLManager, which stores it in its currentPhoto class variable of type Object, and the application goes to the CaptionView.

In the CaptionView, the user can skip the caption-recording step or launch the AudioManager. The recording is limited to four seconds and the caption is automatically played back. The user can record again or choose to keep the audio. The AudioManager compresses the recording as a WAV file and saves it on the SD card. Its URL is saved in the SQLManager’s currentPhoto object. The next step is to add geographic information.

Figure 17-1. Packages and classes for the Album application
Figure 17-2. The OpeningView

In the GeoView, the user can skip or launch the GeoService. This service creates a Geolocation instance, fetches coordinates, and then requests the corresponding city and country from the Yahoo! API. As in the previous steps, the geodata is saved in the SQLManager’s currentPhoto object. These three steps are shown in Figure 17-4.

Figure 17-3. The AddView, native camera application, and Media Gallery
Figure 17-4. The CameraView, CaptionView, and GeoView

In the SavingView mode, data saving can be skipped or the data can be saved. For the latter, the SQLManager opens an SQL connection and saves the data, then closes the connection. The application goes back to the OpeningView.

Back at our starting point, another navigation choice is the Group menu. The MenuView page requests the number of images saved from the SQLManager and displays them as a list of items. If the list is taller than the screen, it becomes scrollable. Selecting one of the items takes the user to the PhotoView screen. The SavingView page and MenuView page are shown in Figure 17-5.

The PhotoView displays the image selected in the MenuView. Choosing to connect calls the PeerService to set up a P2P connection over the WiFi network. Once it is established, the data is requested from the SQLManager using the item ID and then sent. It includes the ByteArray of the image, a WAV file for the audio, and the city and country as text. These steps are displayed in Figure 17-6.

Figure 17-5. The SavingView and MenuView
Figure 17-6. The PhotoView and the steps to send the picture information using a P2P connection

Permissions

This application needs the following permissions to access the Internet, write to the SD card, and access GPS sensors, the camera, and the microphone:

[code]

<android>
    <manifestAdditions>
        <![CDATA[
            <manifest>
                <uses-permission android:name="android.permission.INTERNET"/>
                <uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE"/>
                <uses-permission android:name="android.permission.ACCESS_FINE_LOCATION"/>
                <uses-permission android:name="android.permission.ACCESS_COARSE_LOCATION"/>
                <uses-permission android:name="android.permission.CAMERA"/>
                <uses-permission android:name="android.permission.RECORD_AUDIO"/>
            </manifest>
        ]]>
    </manifestAdditions>
</android>

[/code]

Navigation

The ViewManager used here is almost identical to the one used in the other applications in this book. The flow is a step-by-step process in which the user can choose to skip the steps that are optional.
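To make the idea concrete, here is a minimal sketch of what such a view-switching manager could look like. The class layout, the views map, and the onShow call are assumptions for this example, not the actual Album source:

[code]

package view {

    import flash.display.MovieClip;
    import flash.display.Sprite;

    // Minimal view-switching manager (sketch only).
    public class ViewManager extends Sprite {

        private var views:Object = {};        // view name -> MovieClip instance
        private var currentView:MovieClip;

        public function registerView(name:String, view:MovieClip):void {
            views[name] = view;
        }

        public function setView(name:String):void {
            if (currentView != null) {
                removeChild(currentView);
            }
            currentView = views[name];
            addChild(currentView);
            if (currentView.onShow != undefined) {
                currentView.onShow();         // let the view set itself up
            }
        }
    }
}

[/code]

A view dispatches a navigation event with the name of the next view, and the manager responds by calling setView with that name.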

Images

The CameraView is used to get an image, either by using the media library or by taking one using the camera. The choice is based on a parameter passed from the previous screen. The process of receiving the bytes, scaling, and displaying the image is the same regardless of the image source. It is done by a utility class called BitmapDataSizing and is based on the dimensions of the screen.
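The resizing boils down to scaling the loaded BitmapData so that it fits the stage while preserving its aspect ratio. Here is a minimal sketch of such a utility; the function name and signature are assumptions for this example:

[code]

import flash.display.Bitmap;
import flash.display.BitmapData;
import flash.geom.Matrix;

// Scales a source BitmapData to fit within maxWidth x maxHeight,
// preserving the aspect ratio (sketch only).
function sizeToFit(source:BitmapData, maxWidth:Number, maxHeight:Number):Bitmap {
    var ratio:Number = Math.min(maxWidth / source.width, maxHeight / source.height);
    var scaled:BitmapData =
        new BitmapData(source.width * ratio, source.height * ratio, false, 0x000000);
    var matrix:Matrix = new Matrix();
    matrix.scale(ratio, ratio);
    scaled.draw(source, matrix, null, null, null, true); // smoothing on
    return new Bitmap(scaled);
}

[/code]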

To improve this application, check if an image is already saved when the user selects it again to avoid duplicates.

Audio

The audio caption is a novel way to save a comment along with the image. No existing photo service offers a way to package an audio commentary with a picture, but you could build such an application.

The recordings are saved as WAV files using the Adobe WAVWriter class and can be decoded again using a third-party library. Here, we create an Album directory on the SD card and a mySounds directory inside it to store the WAV files.
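As a rough sketch, assuming the recorded samples have already been encoded into a ByteArray of WAV data, saving into the Album/mySounds directory could look like this (the function name and file-naming scheme are assumptions; the WAV encoding itself is not shown):

[code]

import flash.filesystem.File;
import flash.filesystem.FileMode;
import flash.filesystem.FileStream;
import flash.utils.ByteArray;

// Writes the encoded WAV bytes to Album/mySounds on the storage card (sketch only).
function saveWav(wavBytes:ByteArray, fileName:String):String {
    var dir:File = File.documentsDirectory.resolvePath("Album/mySounds");
    dir.createDirectory(); // no-op if the directory already exists

    var file:File = dir.resolvePath(fileName);
    var stream:FileStream = new FileStream();
    stream.open(file, FileMode.WRITE);
    stream.writeBytes(wavBytes);
    stream.close();

    return file.nativePath; // stored in currentPhoto.audio
}

[/code]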

Reverse Geolocation

Reverse geolocation is the process of using geographic coordinates to obtain an address, such as a city name or a street address.

In this application, we are only interested in the city name and country, so coarse data is sufficient and we do not need to wait for the GPS data to stabilize. As soon as we get a response from the Yahoo! service, we move on to the next step.
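A minimal sketch of the GeoService logic might look like the following; the request URL is a placeholder and the callback names are assumptions:

[code]

import flash.events.Event;
import flash.events.GeolocationEvent;
import flash.net.URLLoader;
import flash.net.URLRequest;
import flash.sensors.Geolocation;

var geolocation:Geolocation;

function startGeo():void {
    if (Geolocation.isSupported) {
        geolocation = new Geolocation();
        geolocation.addEventListener(GeolocationEvent.UPDATE, onUpdate);
    }
}

// Use the first fix we receive; coarse data is good enough for city/country.
function onUpdate(event:GeolocationEvent):void {
    geolocation.removeEventListener(GeolocationEvent.UPDATE, onUpdate);
    var url:String = "http://where.yahooapis.com/geocode?gflags=R" +
        "&q=" + event.latitude + "," + event.longitude; // placeholder endpoint
    var loader:URLLoader = new URLLoader();
    loader.addEventListener(Event.COMPLETE, onGeoResponse);
    loader.load(new URLRequest(url));
}

function onGeoResponse(event:Event):void {
    // Parse the city and country out of the response, then store them
    // in SQLManager.currentPhoto.geo (parsing omitted in this sketch).
}

[/code]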

SQLite

SQLManager is a static class, so it can be accessed from anywhere in the application. It holds an object that stores the information related to a photo until the entry is complete and ready to be saved:

[code]

var currentPhoto:Object = {photo:"", audio:"", geo:""};

[/code]

The photo property stores the path to where the image is saved in the Gallery, the audio property stores the path to the WAV file, and the geo property stores a string with city and country information.

From the SavingView view, the object is saved in the myAlbum.db file on the SD card.
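Saving can be as simple as opening a connection to myAlbum.db, running an INSERT, and closing the connection again. The table and column names below are assumptions for this sketch, and the table is assumed to already exist:

[code]

import flash.data.SQLConnection;
import flash.data.SQLStatement;
import flash.filesystem.File;

// Saves currentPhoto into myAlbum.db on the SD card (sketch only).
function savePhoto(currentPhoto:Object):void {
    var connection:SQLConnection = new SQLConnection();
    connection.open(File.documentsDirectory.resolvePath("myAlbum.db"));

    var statement:SQLStatement = new SQLStatement();
    statement.sqlConnection = connection;
    statement.text =
        "INSERT INTO album (photo, audio, geo) VALUES (:photo, :audio, :geo)";
    statement.parameters[":photo"] = currentPhoto.photo;
    statement.parameters[":audio"] = currentPhoto.audio;
    statement.parameters[":geo"] = currentPhoto.geo;
    statement.execute();

    connection.close();
}

[/code]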

P2P Connection

The peer-to-peer connection is used to send the image, audio caption, and location over a LAN. This example demonstrates the potential of the technology more than a real-world use case, because the transfer is slow unless the information is sent in packets and reassembled. The technology works well for fairly small amounts of data and has a lot of potential for gaming and social applications.

Once the user has selected an image, she can transfer it to a companion desktop application from the PhotoView. The PeerService class handles the connection over the LAN and the posting of data.
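A minimal sketch of such a PeerService, assuming serverless RTMFP with IP multicast on the local network (the group name, multicast address, and function names are assumptions):

[code]

import flash.events.NetStatusEvent;
import flash.net.GroupSpecifier;
import flash.net.NetConnection;
import flash.net.NetGroup;
import flash.utils.ByteArray;

var connection:NetConnection;
var group:NetGroup;

function connect():void {
    connection = new NetConnection();
    connection.addEventListener(NetStatusEvent.NET_STATUS, onStatus);
    connection.connect("rtmfp:"); // serverless, LAN-only RTMFP
}

function onStatus(event:NetStatusEvent):void {
    if (event.info.code == "NetConnection.Connect.Success") {
        var specifier:GroupSpecifier = new GroupSpecifier("albumGroup");
        specifier.postingEnabled = true;
        specifier.ipMulticastMemberUpdatesEnabled = true;
        specifier.addIPMulticastAddress("225.225.0.1:30000"); // example address
        group = new NetGroup(connection, specifier.groupspecWithAuthorizations());
    }
}

// Called once the NetGroup reports "NetGroup.Connect.Success".
function sendData(photo:ByteArray, audio:ByteArray, geo:String):void {
    group.post({photo: photo, audio: audio, geo: geo});
}

[/code]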

Scrolling Navigation

The MenuView that displays the images saved in the database has scrolling capability if the content is larger than the height of the screen.

There are two challenges to address. The first is the performance of scrolling a potentially large list. The second is overlapping interactive objects: the scrollable list contains elements that also respond to mouse events, and both behaviors need to work without conflict.

We only need to scroll the view if the container is larger than the device height, so there is no need to add unnecessary code. Let’s check its dimensions in the onShow method:

[code]

function onShow():void {
    deviceHeight = stage.stageHeight;
    container = new Sprite();
    addChild(container);
    // populate container
    if (container.height > deviceHeight) {
        trace("we need to add scrolling functionality");
    }
}

[/code]

If the container sprite is taller than the screen, let’s add the functionality to scroll. Touch events do not perform as well as mouse events. Because we only need one touch point, we will use a mouse event to detect the user interaction. Note that we set cacheAsBitmap to true on the container to improve rendering:

[code]

function onShow():void {
    if (container.height > deviceHeight) {
        container.cacheAsBitmap = true;
        stage.addEventListener(MouseEvent.MOUSE_DOWN, touchBegin, false, 0, true);
        stage.addEventListener(MouseEvent.MOUSE_UP, touchEnd, false, 0, true);
    }
}

[/code]

To determine if the mode is to scroll or to click an element that is part of the container, we start a timeout. We will see later why we need this timer in relation to the elements:

[code]

import flash.utils.setTimeout;
import flash.utils.clearTimeout;

var oldY:Number = 0.0;
var newY:Number = 0.0;
var timeout:uint;

function touchBegin(event:MouseEvent):void {
    oldY = event.stageY;
    newY = event.stageY;
    timeout = setTimeout(startMove, 400);
}

[/code]

When the timer expires, we set the mode to scrollable by calling the startMove method. We want to capture the position change on MOUSE_MOVE but only need to render the change to the screen on ENTER_FRAME; this guarantees smoother and more consistent motion. updateAfterEvent should never be used in mobile development because it is too demanding for devices:

[code]

function startMove():void {
    stage.addEventListener(MouseEvent.MOUSE_MOVE, touchMove, false, 0, true);
    stage.addEventListener(Event.ENTER_FRAME, frameEvent, false, 0, true);
}

[/code]

When the finger moves, we update the value of the newY coordinate:

[code]

function touchMove(event:MouseEvent):void {
    newY = event.stageY;
}

[/code]

On the enterFrame event, we render the screen using the new position. The container is moved according to the new position. To improve performance, we show and hide the elements that are not in view using predefined bounds:

[code]

var totalChildren:int = container.numChildren;
var topBounds:int = -30;

function frameEvent(event:Event):void {
    if (newY != oldY) {
        var newPos:Number = newY - oldY;
        oldY = newY;
        container.y += newPos;
        for (var i:int = 0; i < totalChildren; i++) {
            var mc:MovieClip = container.getChildAt(i) as MovieClip;
            var pos:Number = container.y + mc.y;
            mc.visible = (pos > topBounds && pos < deviceHeight);
        }
    }
}

[/code]

On touchEnd, the listeners are removed:

[code]

function touchEnd(event:MouseEvent):void {
    stage.removeEventListener(MouseEvent.MOUSE_MOVE, touchMove);
    stage.removeEventListener(Event.ENTER_FRAME, frameEvent);
}

[/code]

As mentioned before, elements in the container have their own mouse event listeners:

[code]

element.addEventListener(MouseEvent.MOUSE_DOWN, timeMe, false, 0, true);
element.addEventListener(MouseEvent.MOUSE_UP, clickAway, false, 0, true);

[/code]

On mouse down, the boolean variable isMoving is set to false and the visual cue indicates that the element was selected:

[code]

var isMoving:Boolean = false;
var selected:MovieClip;

function timeMe(event:MouseEvent):void {
    isMoving = false;
    selected = event.currentTarget as MovieClip;
    selected.what.textColor = 0x336699;
}

[/code]

On mouse up and within the time allowed, the stage listeners and the timeout are removed. If the boolean isMoving is still set to false and the target is the selected item, the application navigates to the next view:

[code]

function clickAway(event:MouseEvent):void {
    touchEnd(event);
    clearTimeout(timeout);
    if (selected == event.currentTarget && isMoving == false) {
        dispatchEvent(new ClickEvent(ClickEvent.NAV_EVENT,
            {view:"speaker", id:selected.id}));
    }
}

Now let’s extend the frameEvent code to deactivate the element when scrolling. We check that the selected variable holds a value (an element was pressed) and that the motion is more than two pixels; this accounts for screens that are very responsive. If both conditions are met, we change the boolean value, reset the look of the element, and set the selected variable to null:

[code]

function frameEvent(event:Event):void {
    if (newY != oldY) {
        var newPos:Number = newY - oldY;
        oldY = newY;
        container.y += newPos;
        for (var i:int = 0; i < totalChildren; i++) {
            var mc:MovieClip = container.getChildAt(i) as MovieClip;
            var pos:Number = container.y + mc.y;
            mc.visible = (pos > topBounds && pos < deviceHeight);
        }
        if (selected != null && Math.abs(newPos) > 2) {
            isMoving = true;
            selected.what.textColor = 0x000000;
            selected = null;
        }
    }
}

[/code]

There are various approaches to handle scrolling. For a large number of elements, the optimal approach is to create only as many element containers as are visible on the screen and populate their content on the fly. Instead of moving a large list, move the containers as in a carousel animation and update their content by pulling the data from a Vector or another data source.
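As a rough sketch of that recycling idea, assuming a fixed row height and a Vector of data items (the row height, createRow, and populateRow are hypothetical; deviceHeight comes from the earlier snippets):

[code]

import flash.display.Sprite;

var rowHeight:Number = 80;                        // assumed fixed row height
var data:Vector.<String> = new <String>["item 1", "item 2", "item 3"];
var visibleRows:int = Math.ceil(deviceHeight / rowHeight) + 1;
var rows:Vector.<Sprite> = new Vector.<Sprite>();

// Create only as many row containers as fit on screen, plus one spare.
for (var i:int = 0; i < visibleRows; i++) {
    var row:Sprite = createRow();                 // hypothetical row factory
    rows.push(row);
    addChild(row);
}

// On each scroll update, reposition the rows and refresh their content
// from the data Vector instead of moving one long display list.
function updateRows(scrollY:Number):void {
    var firstIndex:int = Math.max(0, Math.floor(scrollY / rowHeight));
    for (var j:int = 0; j < visibleRows; j++) {
        var index:int = firstIndex + j;
        var r:Sprite = rows[j];
        r.y = index * rowHeight - scrollY;
        r.visible = index < data.length;
        if (r.visible) {
            populateRow(r, data[index]);          // hypothetical content setter
        }
    }
}

[/code]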

If you are using Flash Builder and components, look at the Adobe Lighthouse package (http://www.adobe.com/devnet/devices/fpmobile.html). It contains DraggableVerticalContainer for display objects and DraggableVerticalList for items.

Desktop Functionality

The AIR desktop application, as shown in Figure 17-7, is set up to receive the data and display it. Seeing a high-resolution image on a large screen demonstrates how good the camera quality of some devices can be. The image can be saved to the desktop as a JPEG.
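As a minimal sketch of the save step, assuming the displayed image is re-encoded with the as3corelib JPGEncoder (the function name, file name, and quality setting are assumptions):

[code]

import com.adobe.images.JPGEncoder;
import flash.display.BitmapData;
import flash.filesystem.File;
import flash.filesystem.FileMode;
import flash.filesystem.FileStream;
import flash.utils.ByteArray;

// Encodes the displayed BitmapData as a JPEG and writes it to the desktop.
function saveAsJPEG(bitmapData:BitmapData, fileName:String):void {
    var encoder:JPGEncoder = new JPGEncoder(90);  // quality 0-100
    var jpegBytes:ByteArray = encoder.encode(bitmapData);

    var file:File = File.desktopDirectory.resolvePath(fileName);
    var stream:FileStream = new FileStream();
    stream.open(file, FileMode.WRITE);
    stream.writeBytes(jpegBytes);
    stream.close();
}

[/code]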

Figure 17-7. AIR desktop companion application to display images received from the device

Another technology, not demonstrated here, is Pixel Bender, which is used for image manipulation. It is not available in AIR for Android but is available in AIR on the desktop, so this is another good use case where devices and the desktop can complement one another.

