Blog ⋅ Peter Nelson

Sunday afternoon hack: Pokémon Blue

by peterdn 23. February 2014 19:36

Inspired by the recent Twitch Plays Pokémon phenomenon, I thought it would be interesting to delve into some old-school ROM hacking with the first-generation Pokémon games. In particular, Pokémon Red and Blue are already relatively well documented by the ROM hacking community and have several infamous bugs, technical analysis of which reveals useful information about the game internals. Exploiting this existing knowledge means we can cheat a little and not get bogged down in the drudgery of analysing a ROM from scratch.

Required Tools

We will require:

  1. Gameboy Color emulator and debugger
  2. Hex editor
  3. Pokémon ROM

On Windows, bgb appears to be one of the best choices of emulator as it features a built-in debugger. This includes disassembly, register, and memory views (shown below). The hex editor I’m using is WinHex. As for the game ROM, it goes without saying that you should own a physical copy of the game before you download a ROM.

For this guide I am using Pokémon Blue, however I believe it should also work with Red.

The Goal

Our goal will be to change one of the starting Pokémon, Bulbasaur, into something a bit more interesting: Mew (yes, Mew is built into the Generation 1 games). This should prove to be non-trivial but easy enough to do in an afternoon. In addition, choosing your character’s first Pokémon occurs after only about 30 seconds of gameplay, meaning we can test the hack without needing to play for hours beforehand.

Setup

Load up the emulator. Let’s take a snapshot of the game at the point just before we choose a Pokémon so we don't have to navigate the dialogue every time. For readers unfamiliar with the game, the left-hand screenshot below shows the desired game state. Visible on a desk are the 3 Pokémon the player must choose from: Charmander, Squirtle, and Bulbasaur (left to right). We are attempting to change the rightmost from Bulbasaur to another Pokémon.

Note that when the player selects a Pokémon, some information about it is displayed in the form of a Pokédex entry, shown in the right-hand screenshot below:

[Screenshots: the starter selection screen (left) and Bulbasaur's Pokédex entry (right)]

Technical Background

Hardware

Before we begin, it is necessary to have a rudimentary understanding of the console’s internals. The Gameboy Color’s CPU is an 8-bit modified Zilog Z80 with a 16-bit address bus, meaning it can access 65,536 byte addresses. To overcome this 64KB limitation (the Pokémon ROM, for example, is 1MB), memory bank switching is used. The first 16KB of this address space (actually slightly less; from address 0x0100-0x3FFF) is mapped to the first 16KB of the cartridge ROM. This area is referred to as ROM bank 0. The next 16KB of address space (from 0x4000-0x7FFF) can be mapped to any other 16KB bank within the ROM.

It is important that we are able to convert between ROM addresses and internal addresses. ROM bank X extends from ROM address X * 0x4000 to (X + 1) * 0x4000 - 1. As mentioned before, this is then mapped to internal addresses 0x4000-0x7FFF. Therefore, to find the ROM address corresponding to a particular bank and internal address, we use the following conversion formula:

ROM address = (internal address - 0x4000) + bank * 0x4000

And conversely:

Internal address = ROM address % 0x4000 + 0x4000

bank = ROM address / 0x4000
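To make the conversions concrete, they can be wrapped up in a few lines of C++, something like the following sketch (valid only for the switchable banks, i.e. bank 1 upwards):

#include <cstdint>
#include <cstdio>

// Only valid for the switchable region, i.e. banks 1 and upwards
// (bank 0 is mapped at the bottom of the address space and needs no translation).
static uint32_t to_rom_address(uint32_t bank, uint16_t internal_address)
{
    return (internal_address - 0x4000) + bank * 0x4000;
}

static uint16_t to_internal_address(uint32_t rom_address)
{
    return (uint16_t)(rom_address % 0x4000 + 0x4000);
}

static uint32_t to_bank(uint32_t rom_address)
{
    return rom_address / 0x4000;    // integer division
}

int main()
{
    // The lookup table entry we will be interested in shortly:
    std::printf("0x%X\n", (unsigned)to_rom_address(0x10, 0x50BC));   // 0x410BC
    std::printf("0x%X\n", (unsigned)to_internal_address(0x410BC));   // 0x50BC
    std::printf("0x%X\n", (unsigned)to_bank(0x410BC));               // 0x10
}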

Pokémon ROM

Internally, Pokémon are uniquely identified by index numbers which are completely distinct from the Pokédex numbers that readers may be familiar with. For example, while Bulbasaur has a Pokédex number of 1, its index number is 0x99. From this article (which is a recommended read in itself), we know that there exists a lookup table at address 0x41024 in the ROM that maps index numbers to corresponding Pokédex numbers; since index numbers start at 1, the entry for index number N lives at offset N - 1. For example, the byte at ROM address 0x41024 + 0x99 - 1 = 0x410BC has a value of 1 and therefore represents the mapping for Bulbasaur.

Hacking

We begin by considering how a reasonable implementation of the game might work. Intuitively, we might expect that somewhere in the ROM are 3 bytes, one for each Pokémon on the desk, containing that Pokémon's index number. We would then expect that once a Pokémon is selected by the player, the aforementioned lookup table will be consulted to find the Pokédex number for that index number, in order to display the corresponding Pokédex entry (e.g. for Bulbasaur as shown in the screenshot above). If we can detect when this value is read from memory, perhaps we can trace our way back to the location where the original index number is stored.

Luckily, bgb supports memory access breakpoints. Recall that the byte at ROM address 0x410BC contains the Pokédex number for Bulbasaur. Using our conversion formula we find this should be mapped to internal address 0x410BC % 0x4000 + 0x4000 = 0x50BC. This is where we set our breakpoint, triggered on a memory read:

We then resume our game and select Bulbasaur. After a few seconds, the debugger breaks. In the memory view, we check to confirm that the correct bank (0x410BC / 0x4000 = 0x10) is selected. It is -- success! The bgb debugger at this point is shown below:

Now we take a look at the disassembly view. The debugger is paused on the instruction that triggered the memory access breakpoint (highlighted in blue). This instruction reads the byte at the address held in register HL and copies it into register A. We can verify that the HL register contains the address 0x50BC using the register view at the top right.

We can see this looks like a function that begins at address 0x5010:

push  bc
push  hl
ld    a, (D11E)    ; a = the selected Pokémon's index number (0x99 for Bulbasaur)
dec   a            ; index numbers start at 1, so subtract 1
ld    hl, 5024     ; hl = base address of the lookup table
ld    b, 00
ld    c, a         ; bc = index - 1
add   hl, bc       ; hl = 0x5024 + (index - 1)
ld    a, (hl)      ; a = the corresponding Pokédex number
ld    (D11E), a    ; write it back over the index number at 0xD11E
pop   hl
pop   bc
ret

In line 3, a byte is read from address 0xD11E; it is then decremented and added to the constant 0x5024 (which looks suspiciously like the base address of the index number lookup table) to produce the address 0x50BC. Aha! Let's follow that clue to address 0xD11E and see what we find there. Indeed, we find that the byte at that address contains the value 0x99 (highlighted grey in the memory view):

Address 0xD11E is in an area of internal RAM, so it is not present in the ROM and must have been written by the game program. So where does it get written from originally? Let's add a write-access breakpoint to find out. After reloading our saved state and selecting Bulbasaur again, we break this time at address 0x5136 (ROM bank 7). The disassembly at that location looks like:

Aha, the instruction at address 0x512F looks exactly like what we want -- the index number appears to be a hardcoded operand! Let's open up the ROM in our hex editor and modify the corresponding byte. Using our conversion formula, ROM address = (0x512F - 0x4000) + 7 * 0x4000 = 0x1D12F. As the instruction is 2 bytes long, the operand is actually at address 0x1D12F + 1 = 0x1D130. We change it to 0x15, the index number of Mew:

Now we save the file and restart the game. After a few warnings about invalid checksums, we have success!
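As an aside, if you would rather patch the byte programmatically than fire up a hex editor, a few lines of C++ along these lines should do it (the ROM filename is a placeholder for your own dump; it leaves the cartridge checksums untouched, so expect the same warnings):

#include <cstdio>

int main()
{
    // "pokemon_blue.gb" is a placeholder -- point it at your own ROM dump.
    std::FILE *rom = std::fopen("pokemon_blue.gb", "r+b");
    if (!rom) { std::perror("fopen"); return 1; }

    // Operand byte of the 2-byte instruction at ROM address 0x1D12F.
    std::fseek(rom, 0x1D130, SEEK_SET);
    std::fputc(0x15, rom);    // 0x15 = Mew's index number

    std::fclose(rom);
    return 0;
}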

 

What next?

Here follows a non-exhaustive list of things to try:

  • Change the other 2 starting Pokémon.
  • Change the Prof. Oak dialogue (notice that he still refers to Bulbasaur, so this text must be located somewhere else).
  • Change the Pokémon starting levels.
  • Give the starting Pokémon more HP/Attack Power/moves (hint: read).
  • Change the Pokémon's graphics.

Happy hacking!


Hacking

An Introduction to ARM NEON

by peterdn 3. January 2014 21:07

NEON is ARM’s take on a single instruction multiple data (SIMD) engine. As it becomes increasingly ubiquitous in even low-cost mobile devices, it is more worthwhile than ever for developers to take advantage of it where they can. NEON can be used to dramatically speed up certain mathematical operations and is particularly useful in DSP and image processing tasks.

In this post I will show how it can improve a simple image processing algorithm and will compare it to several other approaches. My particular implementation will be written for Windows Phone 8, however the principles should be platform agnostic and therefore easily applicable to Android and iOS as well.

Testbed application: Sepiagram

We will use NEON to speed up an exciting image processing task: applying sepia tone to a photograph. The basic algorithm given in this blog post will serve as a starting point. Output RGB values are a simple linear combination of input RGB values, with red and green given higher weights than blue in order to give the image a yellowish hue:

outputRed = (inputRed * 0.393) + (inputGreen * 0.769) + (inputBlue * 0.189) 
outputGreen = (inputRed * 0.349) + (inputGreen * 0.686) + (inputBlue * 0.168) 
outputBlue = (inputRed * 0.272) + (inputGreen * 0.534) + (inputBlue * 0.131)
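
As a quick sanity check, a mid-grey input pixel of (128, 128, 128) works out roughly as:

outputRed   = 128 * 0.393 + 128 * 0.769 + 128 * 0.189 = 172.93
outputGreen = 128 * 0.349 + 128 * 0.686 + 128 * 0.168 = 153.98
outputBlue  = 128 * 0.272 + 128 * 0.534 + 128 * 0.131 = 119.94

so the grey shifts towards a warm yellow-brown, roughly (172, 153, 119) once truncated to integers.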

The app skeleton is equally simple, consisting of an image view, an image chooser, and several buttons for applying our various implementations of the sepia tone algorithm. I’ve also included some timing code that allows us to very roughly compare the performance of each approach.

Naive C# implementation

Our first implementation in C# is as straightforward as one might expect:

public static void ApplySepia(WriteableBitmap Bitmap)
{
    for (var i = 0; i < Bitmap.Pixels.Length; ++i)
    {
        var pixel = Bitmap.Pixels[i];

        // Extract red, green, blue components.        
        var ir = (pixel & 0xFF0000) >> 16;
        var ig = (pixel & 0xFF00) >> 8;
        var ib = pixel & 0xFF;
        
        // Apply the transformation.
        var or = (uint)(ir * 0.393f + ig * 0.769f + ib * 0.189f);
        var og = (uint)(ir * 0.349f + ig * 0.686f + ib * 0.168f);
        var ob = (uint)(ir * 0.272f + ig * 0.534f + ib * 0.131f);

        // Saturate the result.
        or = or > 255 ? 255 : or;
        og = og > 255 ? 255 : og;
        ob = ob > 255 ? 255 : ob;

        // Write the resulting pixel back to the bitmap.
        Bitmap.Pixels[i] = (int)(0xFF000000 | or << 16 | og << 8 | ob);
    }
}

The only thing that might not be immediately obvious is why we must saturate the output results. Because the color weights for each output channel total more than 1, the computed values can exceed 255 and no longer fit in an 8-bit subpixel. We therefore cap (saturate) them at 255.

On my Lumia 1020 this implementation takes around 600-800ms for a 3072x1728 pixel image. Not bad, but we can do much better.


Naive to native: C++

It turns out that floating point operations are slow [1], and an immediate improvement can be made by modifying the algorithm slightly to process integers instead of floats. We can scale up each color weight by some factor, apply the transformations, then scale the results back down. As our RGB values are integers between 0 and 255 anyway, this completely eliminates any need for floating point arithmetic. We will choose 1024 as our scale factor: it is a convenient power of 2, and since the color weights are only given to 3 decimal places, the rounding error introduced is negligible. Hence, we can rewrite the equations as:

outputRed = ((inputRed * 402) + (inputGreen * 787) + (inputBlue * 194)) / 1024
outputGreen = ((inputRed * 357) + (inputGreen * 702) + (inputBlue * 172)) / 1024
outputBlue = ((inputRed * 279) + (inputGreen * 547) + (inputBlue * 134)) / 1024

In C++, this becomes:

void NEONRT::NEONRT::IntegerSepia(const Platform::Array<int>^ image)
{
    uint32_t *px = (uint32_t *)image->Data;
    uint32_t *end = px + image->Length;

    for (; px < end; ++px) {
        // Extract red, green, blue components.
        unsigned int ir = (*px & 0x00FF0000) >> 16;
        unsigned int ig = (*px & 0x0000FF00) >> 8;
        unsigned int ib = (*px & 0x000000FF);

        // Apply the transformation.
        unsigned int or = (ir * 402 + ig * 787 + ib * 194) >> 10;
        unsigned int og = (ir * 357 + ig * 702 + ib * 172) >> 10;
        unsigned int ob = (ir * 279 + ig * 547 + ib * 134) >> 10;

        // Saturate the result.
        or = or > 255 ? 255 : or;
        og = og > 255 ? 255 : og;
        ob = ob > 255 ? 255 : ob;

        // Write the resulting pixel back to the bitmap.
        *px = 0xFF000000 | (unsigned int)or << 16 | (unsigned int)og << 8 | (unsigned int)ob;
    }
}

Despite the fact that this version performs 3 more divisions per pixel than the original, this implementation takes around 300-400ms on the same image, already a 2x speed increase.

Utilizing NEON

To briefly recap: NEON is a single instruction multiple data (SIMD) architecture, meaning it can perform the same arithmetic operation on multiple data values in parallel. It has 32x 64-bit registers, named d0-d31 (which can also be viewed as 16x 128-bit registers, q0-q15) [2]. These registers are treated as vectors of elements of the same data type. NEON arithmetic instructions encode the element data type in their suffix in order to apply the same operation to all lanes. For example, vadd.f32 considers a 64-bit register as 2x 32-bit floats, whereas vadd.i8 considers it as 8x 8-bit integers. For a more substantive description, please see the official ARM documentation.

Back to our sepia color transformation equations.

On a conventional single instruction single data machine, the sepia algorithm requires a total of 9 multiplications and 6 additions per pixel. Using NEON SIMD instructions, however, we can operate on vectors all in one go. We can rewrite the above formula in terms of vector multiplications and additions:

[outputRed  outputGreen  outputBlue] = inputRed   * [0.393  0.349  0.272]
                                     + inputGreen * [0.769  0.686  0.534]
                                     + inputBlue  * [0.189  0.168  0.131]

This requires only 3 vector multiplications and 2 vector additions per pixel. In fact, NEON includes a vector multiply and accumulate instruction which simultaneously performs a vector multiplication and addition. Using 1 multiply and 2 multiply-accumulates, we can reduce the total number of operations to 3.

We will walk through one iteration of a loop that processes multiple pixels at a time. We have a 32-byte chunk of memory containing 8x 32-bit pixels. These pixels are further subdivided into 4x 8-bit subpixels – alpha, red, green and blue (ARGB). Each block in the figure below represents 1 byte:

[Diagram: a 32-byte block of memory holding 8 ARGB pixels, one byte per subpixel]

NEON allows us to load 32 bytes using a single opcode:

vld1.u32 { d20, d21, d22, d23 }, [r0]

This has the effect of loading 2 pixels into each of the registers d20-d23. Register r0 is a pointer to the 8-pixel block of memory within the bitmap. Now we have:

[Diagram: registers d20-d23, each now holding 2 pixels]

It should be immediately obvious that we cannot simply multiply in-place by our weights as each subpixel’s value will probably overflow past 255. Therefore we must extend each subpixel to 16 bits:

vmovl.u8 q0, d20
vmovl.u8 q1, d21
vmovl.u8 q2, d22
vmovl.u8 q3, d23

Note that the .u8 suffix on these instructions tells NEON to treat the input registers (d20-d23 in this case) as vectors of 8-bit values (i.e. the subpixels). This is an important distinction, as our output would be structured differently if, for example, we used the similar vmovl.u16 instruction [3]. Now the two pixels contained in the 64-bit-wide d20 are extended and copied to the 128-bit-wide q0, and similarly for d21-d23 and q1-q3. Let's now consider only the first pixel, which is contained in register d0:

[Diagram: the first pixel's subpixels, widened to 16 bits each, in register d0]

Again, 2 bytes per subpixel is not enough to guarantee we won’t overflow: we are multiplying each subpixel (up to 255) by weights that sum to more than 1 and have themselves been scaled by 1024 (2^10). The worst case is 255 * (402 + 787 + 194) = 352,665, which needs 19 bits to represent before we divide by 1024 again. Luckily, NEON provides vector multiply and multiply-accumulate instructions that automatically widen their outputs. We therefore perform our three vector arithmetic operations, starting with a multiply:

vmull.u16 q4, d16, d0[2]

[Diagram: q4 after multiplying the red weights by the first pixel's red subpixel]

Here q4 is the destination register, d16 contains our red color weights, and d0 contains the pixel we are currently operating on. Note that this form of the vmull instruction takes a 16-bit scalar as its 3rd argument, hence we select the red subpixel by subscripting d0. Next, we multiply by the green weights, contained in d17, and accumulate into the destination register:

vmlal.u16 q4, d17, d0[1]

[Diagram: q4 after accumulating the green weights multiplied by the green subpixel]

And then similarly for the blue weights, contained in d18:

vmlal.u16 q4, d18, d0[0]

After repeating for the other 7 pixels (using a unique q register for each, we have 16 after all!), we then perform a right shift by 10 to divide by 1024, narrow and saturate:

vqshrn.u32 d0, q4, 10
...

Here d0 is the destination register, q4 still contains our first pixel, and 10 is a constant shift amount. This is the saturating form of the shift-right instruction, meaning it will cap output values at 2^16 - 1 if they would otherwise overflow their smaller destinations. After repeating another 7 times, we perform a final saturating narrow to fit each pixel back into 4 bytes:

vqmovn.u16 d0, q0
...

[Diagram: the eight pixels narrowed back down to 8 bits per subpixel]

Finally, we set the alpha value to 255 using a bitwise OR and write the results back out to the bitmap:

vorr d0, d0, d19
...
vst1.32 { d0, d1, d2, d3 }, [r0]

Here d0-d3 contain our 8 pixels, d19 contains the constant 0xFF000000FF000000, and r0 is still a pointer to the 8-pixel block in the bitmap.

What else?

The rest of the assembly routine is mainly concerned with setting up the registers containing the color weights, and looping. A link to the full source code of the app can be found at the end of this post.
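For anyone who would rather stay in C or C++, the same per-pixel steps look roughly like the following sketch written with NEON intrinsics from arm_neon.h. It processes 2 pixels per call instead of 8, the function and table names are illustrative only, and it assumes the same little-endian 0xAARRGGBB pixel layout as the C# code; as discussed below, my experience with intrinsics in practice was disappointing, so treat it as a reference for the algorithm rather than a drop-in replacement.

#include <arm_neon.h>
#include <stddef.h>
#include <stdint.h>

// Build with NEON enabled (e.g. -mfpu=neon on GCC/Clang).
// Pixels are assumed to be little-endian 0xAARRGGBB words, i.e. B,G,R,A byte
// order in memory, which matches the lane numbering used in the walkthrough.

// Each table holds one input channel's scaled weights in [B, G, R, A] lane
// order, e.g. WR is input red's contribution to output blue/green/red.
static const uint16_t WR[4] = { 279, 357, 402, 0 };
static const uint16_t WG[4] = { 547, 702, 787, 0 };
static const uint16_t WB[4] = { 134, 172, 194, 0 };
static const uint8_t  ALPHA[8] = { 0, 0, 0, 0xFF, 0, 0, 0, 0xFF };

static inline void SepiaTwoPixels(uint32_t *px,
                                  uint16x4_t wr, uint16x4_t wg, uint16x4_t wb)
{
    uint8x8_t  in   = vld1_u8((const uint8_t *)px);  // load 2 ARGB pixels
    uint16x8_t wide = vmovl_u8(in);                  // widen subpixels to 16 bits
    uint16x4_t p0   = vget_low_u16(wide);            // first pixel  [B,G,R,A]
    uint16x4_t p1   = vget_high_u16(wide);           // second pixel [B,G,R,A]

    // 1 widening multiply + 2 widening multiply-accumulates per pixel.
    uint32x4_t acc0 = vmull_lane_u16(wr, p0, 2);     // weights * red
    acc0 = vmlal_lane_u16(acc0, wg, p0, 1);          // + weights * green
    acc0 = vmlal_lane_u16(acc0, wb, p0, 0);          // + weights * blue
    uint32x4_t acc1 = vmull_lane_u16(wr, p1, 2);
    acc1 = vmlal_lane_u16(acc1, wg, p1, 1);
    acc1 = vmlal_lane_u16(acc1, wb, p1, 0);

    // Divide by 1024 with a saturating narrow to 16 bits, then narrow to 8 bits.
    uint16x4_t n0  = vqshrn_n_u32(acc0, 10);
    uint16x4_t n1  = vqshrn_n_u32(acc1, 10);
    uint8x8_t  out = vqmovn_u16(vcombine_u16(n0, n1));

    out = vorr_u8(out, vld1_u8(ALPHA));              // force alpha to 0xFF
    vst1_u8((uint8_t *)px, out);                     // store 2 pixels
}

static void SepiaIntrinsics(uint32_t *pixels, size_t count)
{
    uint16x4_t wr = vld1_u16(WR), wg = vld1_u16(WG), wb = vld1_u16(WB);
    for (size_t i = 0; i + 1 < count; i += 2)        // assumes an even pixel count
        SepiaTwoPixels(pixels + i, wr, wg, wb);
}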

Results

The resulting NEON implementation takes roughly 80-100ms – an impressive difference, considering the assembly routine is probably not particularly optimized.

To summarize, the results on my Lumia 1020 for each implementation are as follows:

Implementation        Time/ms (average of 10 runs)
C# (floating point)   697
C++ (integer)         228
NEON                  94

Why not NEON intrinsics?

NEON intrinsics are C/C++ functions that, at least in theory, compile down to single NEON instructions. In practice I have found that modern compilers still produce downright awful NEON code. My first implementation of the sepia algorithm using NEON intrinsics in C++ performed worse than the naive C# implementation. Looking at the generated assembly, the compiler seems to love pointlessly [4] spilling registers into RAM, which drastically reduces performance. Thinking that the compiler simply wasn’t intelligent enough to unroll each loop, I manually unrolled them, keeping the theoretical effect of the code exactly the same. The algorithm then did run much faster, but also completely incorrectly, producing vertical blue lines in the output. I didn’t have the time or will to work out why, but either it’s a bug in the Visual C++ compiler or I’m breaking some rules somewhere that I don’t yet know about [4].

Final thoughts

  • NEON can dramatically improve performance of algorithms that can take advantage of data parallelism.
  • Compiler support for NEON is still terrible.
  • It’s not much more difficult to write assembly than to struggle with NEON intrinsics. You can beat the compiler.
  • It can be worth converting your algorithm to avoid floating point operations [1].
  • Don’t draw any conclusions about performance without thorough benchmarking.

Source code

Download Visual Studio 2013 project.

Notes

  1. Not really true in general. Whether floating point operations are slower than integer operations depends on a vast number of factors. From what I’ve experienced on ARM, however, it is often the case.
  2. NEON actually shares its registers with the floating point unit, if one exists.
  3. vmovl.u8 will transform 0xFFFFFFFF to 0x00FF00FF00FF00FF, whereas vmovl.u16 will transform it to 0x0000FFFF0000FFFF.
  4. There may be a good reason I don’t know about, such as certain registers being expected to be preserved across function calls. I’d have thought the compiler would deal with this, however...


ARM | Windows Phone 8

Using Git to auto-publish a website to Windows Server 2008: Remastered

by peterdn 15. July 2012 17:22

My previous blog post on this subject is now out of date, mainly because the version of COPSSH that I used is no longer a free product. What follows is an updated guide, using bog-standard Cygwin to achieve the same results.

Note that this guide is largely given “by example”. The reader will very likely have to make changes to the names of paths and URLs to fit their own particular setup. It is also assumed that the reader is already familiar with Cygwin (or at least Unix) and of course, Windows Server 2008.

Setting up OpenSSH

  1. Download and install Cygwin if you have not already done so. For our purposes, the only additional packages required are OpenSSH and Git. A text editor such as vim or mcedit is also useful. I installed mine to C:\cygwin.

  2. Launch the Cygwin terminal (as administrator). Run the ssh-host-config command to set up our SSH server. The following interactive dialog is then initiated. Be sure to answer ‘yes’ to use privilege separation, to create an sshd user, to install sshd as a service, and to create a privileged user cyg_server:

    $ ssh-host-config
    
    *** Info: Generating /etc/ssh_host_key
    *** Info: Generating /etc/ssh_host_rsa_key
    *** Info: Generating /etc/ssh_host_dsa_key
    *** Info: Generating /etc/ssh_host_ecdsa_key
    *** Info: Creating default /etc/ssh_config file
    *** Info: Creating default /etc/sshd_config file
    *** Info: Privilege separation is set to yes by default since OpenSSH 3.3.
    *** Info: However, this requires a non-privileged account called 'sshd'.
    *** Info: For more info on privilege separation read /usr/share/doc/openssh/README.privsep.
    *** Query: Should privilege separation be used? (yes/no) yes
    *** Info: Note that creating a new user requires that the current account have
    *** Info: Administrator privileges.  Should this script attempt to create a
    *** Query: new local account 'sshd'? (yes/no) yes
    *** Info: Updating /etc/sshd_config file
    
    *** Query: Do you want to install sshd as a service?
    *** Query: (Say "no" if it is already installed as a service) (yes/no) yes
    *** Query: Enter the value of CYGWIN for the daemon: []
    *** Info: On Windows Server 2003, Windows Vista, and above, the
    *** Info: SYSTEM account cannot setuid to other users -- a capability
    *** Info: sshd requires.  You need to have or to create a privileged
    *** Info: account.  This script will help you do so.
    
    *** Info: You appear to be running Windows XP 64bit, Windows 2003 Server,
    *** Info: or later.  On these systems, it's not possible to use the LocalSystem
    *** Info: account for services that can change the user id without an
    *** Info: explicit password (such as passwordless logins [e.g. public key
    *** Info: authentication] via sshd).
    
    *** Info: If you want to enable that functionality, it's required to create
    *** Info: a new account with special privileges (unless a similar account
    *** Info: already exists). This account is then used to run these special
    *** Info: servers.
    
    *** Info: Note that creating a new user requires that the current account
    *** Info: have Administrator privileges itself.
    
    *** Info: No privileged account could be found.
    
    *** Info: This script plans to use 'cyg_server'.
    *** Info: 'cyg_server' will only be used by registered services.
    *** Query: Do you want to use a different name? (yes/no) no
    *** Query: Create new privileged user account 'cyg_server'? (yes/no) yes
    *** Info: Please enter a password for new user cyg_server.  Please be sure
    *** Info: that this password matches the password rules given on your system.
    *** Info: Entering no password will exit the configuration.
    *** Query: Please enter the password: XXXXX
    *** Query: Reenter: XXXXX
    
    *** Info: User 'cyg_server' has been created with password 'XXXXX'.
    *** Info: If you change the password, please remember also to change the
    *** Info: password for the installed services which use (or will soon use)
    *** Info: the 'cyg_server' account.
    
    *** Info: Also keep in mind that the user 'cyg_server' needs read permissions
    *** Info: on all users' relevant files for the services running as 'cyg_server'.
    *** Info: In particular, for the sshd server all users' .ssh/authorized_keys
    *** Info: files must have appropriate permissions to allow public key
    *** Info: authentication. (Re-)running ssh-user-config for each user will set
    *** Info: these permissions correctly. [Similar restrictions apply, for
    *** Info: instance, for .rhosts files if the rshd server is running, etc].
    
    
    *** Info: The sshd service has been installed under the 'cyg_server'
    *** Info: account.  To start the service now, call `net start sshd' or
    *** Info: `cygrunsrv -S sshd'.  Otherwise, it will start automatically
    *** Info: after the next reboot.
    
    *** Info: Host configuration finished. Have fun!
  3. (Optional) To disable root logins and password authentication, add the following lines to /etc/sshd_config:

    PermitRootLogin no
    PasswordAuthentication no
  4. (Optional) To allow only a specific user (or users) to connect, add the following line to /etc/sshd_config:

    AllowUsers <username>
  5. (Optional) To synchronize Windows user accounts with Cygwin, for example if a new Windows user was created:

    mkpasswd -l > /etc/passwd
  6. Start the service with net start sshd.

Enabling public key authentication

  1. It is advisable to use a standard Windows user account for SSH and Git access. For the rest of the guide I will assume we are using a user called ‘newuser’.

  2. You may need to create a home and .ssh directory for newuser if they do not already exist:

    mkdir -p /home/newuser/.ssh
    chown -R newuser /home/newuser
  3. Add your public key info to /home/newuser/.ssh/authorized_keys. If you need to create a key pair and don’t know how, there are plenty of guides for using ssh-keygen or PuTTYgen to achieve this.

  4. Test that you can now log in via SSH. If not, make sure that the sshd service is running, your firewall is properly configured, and your user settings and keys are all correct.

Setting up Git

  1. Log in to the remote server via SSH. If you can do this, most of the work is already done.

  2. Create a new empty Git repository on the remote server. For example:

    $ git init --bare ~/test.git
    Initialized empty Git repository in /home/newuser/test.git/
  3. If everything is set up correctly, cloning the remote repository should now work. For example, with my setup:

    $ git clone ssh://newuser@peterdn.com/home/newuser/test.git
    Cloning into 'test'...
    Enter passphrase for key '/home/Peter/.ssh/id_rsa':
    warning: You appear to have cloned an empty repository.
  4. Make a few additions and test pushing:

    $ cd test/
    
    $ echo hello > hello.txt
    
    $ git add hello.txt
    
    $ git commit -m "Test commit"
    [master (root-commit) f465617] Test commit
     1 files changed, 1 insertions(+), 0 deletions(-)
     create mode 100644 hello.txt
    
    $ git push origin master
    Enter passphrase for key '/home/Peter/.ssh/id_rsa':
    Counting objects: 3, done.
    Writing objects: 100% (3/3), 223 bytes, done.
    Total 3 (delta 0), reused 0 (delta 0)
    To ssh://newuser@peterdn.com/home/newuser/test.git
     * [new branch]      master -> master

 

Auto-publishing a website

The idea is that we maintain a central bare Git repository and our website is a clone of this. We use the Git post-receive hook to automatically pull changes from the central repository to the website. Therefore, when we push changes from a remote client, these changes are automatically reflected in the website. In my setup, my bare Git repository is located in C:\inetpub\git\mysite.git and my wwwroot is located in C:\inetpub\wwwroot\mysite.peterdn.com.

  1. Log in to the remote server via SSH. Create symbolic links to the git and wwwroot directories for convenience (remember we are in a cygwin environment here, and also make sure newuser has the required permissions on these directories):

    ln -s /cygdrive/c/inetpub/git git
    ln -s /cygdrive/c/inetpub/wwwroot wwwroot

    The bare repository can now be accessed at ssh://newuser@peterdn.com/home/newuser/git/mysite.git.

  2. Ensure that your website clone has a remote that correctly points to the bare repository (cygwin path; the local remote is correct in this case):

    $ cd ~/wwwroot/mysite.peterdn.com/
    
    $ git remote -v
    local   /home/newuser/git/mysite.git (fetch)
    local   /home/newuser/git/mysite.git (push)
    origin  C:/inetpub/git/test.git (fetch)
    origin  C:/inetpub/git/test.git (push)

    If not, add a new one. For example:

    $ git remote add local /home/newuser/git/mysite.git
  3. Add the following to the /home/newuser/git/mysite.git/hooks/post-receive script. This executes when a push is completed successfully. It is necessary to unset GIT_DIR so that git-pull uses the current working directory instead of the target git repository:

    #!/bin/sh
    unset GIT_DIR
    cd /home/newuser/wwwroot/mysite.peterdn.com
    git pull local master
  4. Done!


Git

‘Hello World!’ in ARM assembly

by peterdn 14. January 2012 22:29

Over the last few weeks, in an effort to port a small C library to the platform, I’ve been doing a fair bit of tinkering around with the Android NDK.  The NDK is primarily intended to allow Android developers to write performance-critical portions of their apps in native C or C++, which interface with the Android Java API through JNI.  As the C library in question required porting some x86 SIMD assembly, I figured it would be helpful for me to get to know the bare bones of the ARM architecture.  As a means to this end, we can use the NDK’s cross-compiler as a standalone tool to write a simple ‘Hello World!’ console “app” in ARM assembly.  As Android is effectively Linux under the hood, we can apply our x86 Linux assembly programming skills to the ARM platform.

First things first: ‘Hello World!’ in x86 assembly

Briefly, the method involves invoking system calls by talking directly to the underlying Linux kernel.  An example of how to do this in x86 assembly is given here.  As ARM uses a different ABI to x86 (registers are named different things, for a start), this code needs a tiny bit of modification.

Firstly, notice that system calls are identified numerically – in the x86 example, #1 refers to exit() and #4 refers to write().  To invoke a system call, we put its identifier in the EAX register, pass (up to 6) arguments in EBX, ECX, EDX, ESI, EDI, EBP, respectively, and interrupt 0x80 is generated.  This is described in further detail here.  In contrast, the ARM ‘EABI’ calling convention uses a different method which is described vaguely in these patch notes.  We can glean that, on ARM, the system call identifier is put in register R7, arguments are passed in R0-R6 (respecting “EABI arrangement” where appropriate, i.e. 64-bit arguments), and the kernel is called with the ‘SWI 0’ instruction. 

Secondly, as they are not guaranteed to be the same on each platform, we must look up the system call identifiers for exit() and write().  For this we refer to the Linux kernel source – $LINUX_SOURCE_ROOT/arch/arm/include/asm/unistd.h, specifically.  As it turns out, these two system calls do have the same identifiers on both x86 and ARM platforms.
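
The relevant definitions in that header look roughly like this (abridged from memory, so double-check against your own source tree; with EABI the syscall base is 0, so these reduce to 1 and 4):

/* arch/arm/include/asm/unistd.h (abridged) */
#define __NR_exit     (__NR_SYSCALL_BASE +  1)
#define __NR_write    (__NR_SYSCALL_BASE +  4)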

To ARM

So our assembly code, in GAS syntax, looks very much like (see inline comments for details):

.data

msg:
    .ascii      "Hello, ARM!\n"
len = . - msg


.text

.globl _start
_start:
    /* syscall write(int fd, const void *buf, size_t count) */
    mov     %r0, $1     /* fd -> stdout */
    ldr     %r1, =msg   /* buf -> msg */
    ldr     %r2, =len   /* count -> len(msg) */
    mov     %r7, $4     /* write is syscall #4 */
    swi     $0          /* invoke syscall */
   
    /* syscall exit(int status) */
    mov     %r0, $0     /* status -> 0 */
    mov     %r7, $1     /* exit is syscall #1 */
    swi     $0          /* invoke syscall */

Assembling

Save the above as hello.S and run it through the GNU cross-assembler provided with the NDK.  I will assume that you have the prebuilt NDK toolchain directory in your PATH (in my case here, /Users/peterdn/android-ndk/toolchains/arm-linux-androideabi-4.4.3/prebuilt/darwin-x86/bin):

arm-linux-androideabi-as -o hello.o hello.S
arm-linux-androideabi-ld -s -o hello hello.o

Deploying to Android

For many, the easiest way to test the above binary is by deploying it to an Android device with an ARM processor.  This also means we can take advantage of the insanely useful ‘adb’ tool.  If you happen to be running bog-standard Linux on an ARM device, the binary should still run, providing your kernel supports the newer EABI (I believe 2.6.15 and above).

To deploy and test on Android, simply run:

adb push hello /data/local/tmp/hello
adb shell /data/local/tmp/hello

It is also possible to run the binary locally on your device using the Android Terminal Emulator, as below:

Android Terminal Emulator screenshot

Enjoy!


Android | NDK

Using Git to auto-publish a website to Windows Server 2008

by peterdn 4. November 2011 10:07

(Updated July 15 2012.  Much of the information presented here is now obsolete -- check out my updated guide.)

This is a quick and dirty guide.  YMMV.

I was motivated by the following:

  1. I want to work on and publish my website from several different machines, running different operating systems, with a minimum of fuss.
  2. After saving any changes to my central repository, I ideally want to see those changes with only a refresh of my browser.
  3. FTP publishing is generally clunky as hell.
  4. I’d done a similar thing before for a client, using SVN, and it worked well.
  5. Git is great.
  6. Whilst GitHub is fantastic, it costs money for private repositories and I would rather not pay for something when I can use my own VPS.
  7. Auto-publishing to my web server would be more difficult if my code was hosted on GitHub, anyway.

Setting up a Git server on Windows

This part is heavily inspired by the excellent and detailed guide by Tim Davis.  Since writing, some of his steps have changed and/or are unnecessary and/or I chose to do them in a slightly different way.  If you run into any problems with any of the steps listed here, please check out his guide for some solutions to common issues.  Here follows a brief summary of what I did:

  1. Download and install COPSSH to C:\ssh.

  2. Create a new Windows user (optional; you can enable SSH access for an existing user if you prefer).

  3. Open COPSSH Control Panel –> Users –> Add.  Enter details for the user account <user> you want to enable SSH access for.  Choose Linux shell + Sftp.  I chose to allow both password and public key authentication.

  4. Add a new inbound rule to the Windows Firewall to allow SSH traffic in (port 22, unless you changed it).

  5. Add your public key to C:\ssh\home\<user>\.ssh\authorized_keys (create if needed).  If you need to generate a public/private key pair, use PuTTYgen.

  6. Test that everything is working so far by attempting to connect from another machine.

  7. Download and install msysgit (at time of writing, Git-1.7.7.1-preview20111027.exe is the file you want) to C:\git.

  8. Open C:\ssh\home\<user>\.bashrc and add the following to the end of the file:

    export PATH=$PATH:/cygdrive/c/git/bin
  9. Copy git-receive-pack.exe and git-upload-pack.exe from C:\git\libexec\git-core to C:\git\bin.

  10. Verify that everything works by initialising a bare repository, cloning it, and pushing some changes.  If you have a repository in C:\ssh\home\<user>\test.git, it can be accessed via ssh://<user>@<hostname>/ssh/home/<user>/test.git (TODO: find out why it’s rooted at /ssh).  The following command-line session is fairly representative of success:

    peterdn@ubuntu:~$ ssh 192.168.61.131
    Last login: Thu Nov  3 21:45:58 2011 from 192.168.61.130
    
    peterdn@WIN-53SSN6PLF6F ~
    $ mkdir test.git
    
    peterdn@WIN-53SSN6PLF6F ~
    $ git init --bare test.git/
    Initialized empty Git repository in C:/ssh/home/peterdn/test.git/
    
    peterdn@WIN-53SSN6PLF6F ~
    $ exit
    logout
    Connection to 192.168.61.131 closed.
    
    peterdn@ubuntu:~$ git clone ssh://peterdn@192.168.61.131/ssh/home/peterdn/test.git test
    Initialized empty Git repository in /home/peterdn/test/.git/
    warning: You appear to have cloned an empty repository.
    
    peterdn@ubuntu:~$ cd test
    
    peterdn@ubuntu:~/test$ echo hello > hello.txt
    
    peterdn@ubuntu:~/test$ git add hello.txt
    
    peterdn@ubuntu:~/test$ git commit -m "initial commit"
    [master (root-commit) 64f843d] initial commit
     1 files changed, 1 insertions(+), 0 deletions(-)
     create mode 100644 hello.txt
    
    peterdn@ubuntu:~/test$ git push origin master
    Counting objects: 3, done.
    Writing objects: 100% (3/3), 219 bytes, done.
    Total 3 (delta 0), reused 0 (delta 0)
    To ssh://peterdn@192.168.61.131/ssh/home/peterdn/test.git
     * [new branch]      master -> master

Setting up a repository for auto-publish

Now that we have our Git server up and running, we can begin to invoke the black magic required to implement auto-publishing.  This part of the guide will be purely by-example; your setup will undoubtedly differ from mine, but hopefully the principles remain the same.  My desired setup looks like so:

  • Git repository is located in C:\inetpub\git\mysite.git.
  • Website is located in C:\inetpub\wwwroot\mysite.peterdn.com.  This is a clone of the above repository.

Note: for some reason that I’m yet to fathom, my git*.exe binaries live in a different virtual directory environment to the shell, and of course, Windows.  For example, what my shell thinks is /home/peterdn, git.exe thinks is /c/ssh/home/peterdn.  This has turned out to bite me a couple of times, but if you’re aware of the problem, it might make things easier to diagnose.

Now:

  1. In C:\ssh\home\<user>, create a symbolic link to C:\inetpub\git, using the following command:

    mklink.exe /D git C:\inetpub\git
  2. Similarly, create a symbolic link to C:\inetpub\wwwroot.

  3. Make sure that <user> has appropriate permissions for C:\inetpub\git\mysite.git and C:\inetpub\wwwroot\mysite.peterdn.com.

  4. Due to the bizarre path problem mentioned above, and the fact that I’d cloned <website> from within cmd.exe, my remote config currently looks like:

    $ git remote -v
    origin  C:\inetpub\git\mysite.git (fetch)
    origin  C:\inetpub\git\mysite.git (push)

    This will make things unhappy if we attempt to pull from origin in the Cygwin environment.  Therefore, I added another remote:

    git remote add local /c/ssh/home/peterdn/git/mysite.git
  5. Now we add the hook that will automatically pull changes whenever the main repository receives changes.  Add the following to C:\inetpub\git\mysite.git\hooks\post-receive:

    #!/bin/sh
    unset GIT_DIR
    cd /c/ssh/home/peterdn/wwwroot/mysite.peterdn.com
    git pull local master
  6. Verify that it works by pushing some changes.

Future Improvements

The results of the above are sufficient for the moment.  However, there are a few things that I would consider changing or adding in the future:

  1. Remove the requirement for the /ssh/home/<user>/git component from the URL, as it looks messy and is just generally bad form.
  2. Sort out issues with different virtual directory structures.
  3. Have a separate website for testing out changes without having to mess with the live site.  I assume this is as simple as having separate “stable” and “current” branches, and pulling from these to the appropriate places.  But I could be wrong.
  4. Remove the .git directory from my web root.  I believe this is possible using a detached work tree.
  5. GitHub-style web interface for managing my repositories.


Git

To make a Metro Appx Package from scratch, you must first …

by peterdn 18. September 2011 12:42

Windows 8 has been big news this week following its announcement and preview at the Microsoft BUILD conference.  After a bit of playing around with Visual Studio on the developer preview, I’ve become very intrigued with the new Metro platform and WinRT APIs.  Running through a few of the tutorials and samples, it looks very different to traditional Windows programming, and on the whole, quite promising and refreshing.

If you’re anything like me, you’ll be wanting to know what goes on beneath the surface when Visual Studio performs the incantations that result in your code coming to life.  So, I decided to write and package up a Metro app (almost) by hand, to see for myself exactly what is involved, and hopefully learn a thing or two about the new platform and tools.

Requirements


  1. The Windows 8 developer preview (with developer tools).
  2. A vague awareness of what Metro and WinRT are (see links at the end of this post if you’ve no idea what I’m talking about).
  3. Familiarity with digital certificates and Visual Studio command-line tools OR enough trust in me to blindly run commands I tell you to.
  4. A desire to learn about the processes involved in packaging and deploying Metro apps.

Overview

An overview of the process pipeline we will be following is shown below.  If you have experience working with Visual Studio command-line tools, several of these stages may already be familiar to you.

Process Pipeline

All code and resources used henceforth can be downloaded here or from the link at the end of the post.

The Code

Lets begin by writing a very simple Metro-style app that contains a Grid control and a Button that does nothing.  To simplify things at the command-line, these UI elements are created procedurally, rather than being defined in XAML markup, and the following code is all contained in the one file (MyApp.cs).  If you’re familiar with WPF (or indeed not) it should be reasonably obvious what is going on here:

using System;
using Windows.ApplicationModel.Activation;
using Windows.UI.Xaml;
using Windows.UI.Xaml.Controls;
using Windows.UI.Xaml.Media;

namespace MyApp
{
    class Program
    {
        public static void Main (string[] args)
        {
            var app = new MyApp();
            app.Run();
        }
    }
    
    class MyApp : Windows.UI.Xaml.Application
    {
        public MyApp()
        {
        }
        
        protected override void OnLaunched(LaunchActivatedEventArgs args)
        {
            var layoutRoot = new Grid() { Background = new SolidColorBrush(Colors.Blue) };
            layoutRoot.Children.Add(new Button() { Content = "Hello!" });

            Window.Current.Content = layoutRoot;
            Window.Current.Activate();
        }
    }
}

Appx Manifest

Next, we write up the application manifest.  AppxManifest.xml contains metadata about the package and the application contained in it.  This includes information such as the app’s entry point, its capabilities (similar to Android permissions), and dependencies.  If you have already created a Metro app using Visual Studio, you may have noticed that a file named “Package.appxmanifest” is among those automatically generated for you.  Behind the scenes, when you hit build, Visual Studio chews up this file and spits out an AppxManifest.xml.  Let’s not worry about what black magic goes on there and just manually create our own:

<?xml version="1.0" encoding="utf-8"?>
<Package xmlns="http://schemas.microsoft.com/appx/2010/manifest">
  <Identity Name="MyApp"
            Publisher="CN=peterdn.com"
            Version="1.0.0.0" />
  <Properties>
    <DisplayName>MyApp</DisplayName>
    <PublisherDisplayName>peterdn.com</PublisherDisplayName>
    <Logo>Images\Logo.png</Logo>
    <Description>My Cool App</Description>
  </Properties>
  <Prerequisites>
    <OSMinVersion>6.2</OSMinVersion>
    <OSMaxVersionTested>6.2</OSMaxVersionTested>
  </Prerequisites>
  <Resources>
    <Resource Language="en-us" />
  </Resources>
  <Applications>
    <Application Id="MyAppId" 
        Executable="MyApp.exe" 
        EntryPoint="MyApp.App">
        <VisualElements 
            DisplayName="MyApp"
            Logo="Images\Logo.png"
            SmallLogo="Images\SmallLogo.png"
            Description="My Cool App"
            ForegroundText="light"
            BackgroundColor="#222222">
            <SplashScreen Image="Images\SplashScreen.png" />
        </VisualElements>
    </Application>
  </Applications>
</Package>

Most of this is pretty self-explanatory.  However, you should change the Name and Publisher attributes in the <Identity> tag to your own values.  As far as I can tell, Name can be anything you want (I chose a GUID because that’s what Visual Studio does).  However, Publisher must be the name of the root certificate that you will generate later.  If these do not correspond, you will not be able to sign the app.  Choose something sensible for the moment, such as “CN=my.domain.com”.

Also, if you changed the class or namespace names in the C# source above, don’t forget to update the EntryPoint attribute in <Application>.

Compiling

The basic operation of the C# compiler has not changed much in this new release, however there are a couple of interesting things that you should be aware of when developing a Metro app.  Firstly, a new build target has been introduced – /target:appcontainerexe.  Specifying this target informs the C# compiler to generate an executable that runs in the context of an AppContainer (i.e. the Metro UI ‘shell’).  Secondly, WinRT APIs are accessed through referencing .winmd metadata files.  These have the same format as .NET assemblies, and can therefore be viewed in your favourite reflector tool or disassembler.  On my install, these are located in the “C:\Program Files (x86)\Windows Kits\8.0\Windows Metadata” directory.  With a quick peek at the using directives in the code above, and some trial and error with csc.exe, we can quickly work out the required references.  So, conjure up a Visual Studio 11 command prompt and run the following (superfluous newlines inserted for dramatic effect):

csc.exe /target:appcontainerexe 
        /out:MyApp.exe 
        /r:"C:\Program Files (x86)\Windows Kits\8.0\Windows Metadata\windows.applicationmodel.activation.winmd" 
        /r:"C:\Program Files (x86)\Windows Kits\8.0\Windows Metadata\windows.ui.xaml.winmd" 
        /r:"C:\Program Files (x86)\Windows Kits\8.0\Windows Metadata\windows.ui.xaml.media.winmd" 
        /r:"C:\Program Files (x86)\Windows Kits\8.0\Windows Metadata\windows.ui.xaml.controls.winmd" 
        /r:"C:\Program Files (x86)\Windows Kits\8.0\Windows Metadata\windows.ui.xaml.controls.primitives.winmd" 
        MyApp.cs

Packaging

Next, we build a simple directory structure for the package.  Create a new directory and copy MyApp.exe and AppxManifest.xml to its root.  Either create your own Logo.png, SmallLogo.png, and SplashScreen.png image resources (as specified in AppxManifest.xml), or use mine.  Place these images in an “Images” subdirectory.  Your structure should look something like this:

  • AppxManifest.xml
  • MyApp.exe
  • Images/
    • Logo.png
    • SmallLogo.png
    • SplashScreen.png

Now we pack up the directory into .appx form.  This is actually just standard ZIP format, however we will prefer to use the new MakeAppx.exe tool (rather than WinZip) to create and unpack packages.  Hit the command prompt again and run:

MakeAppx.exe pack /d .\output /p MyApp.appx

If your AppxManifest.xml has any glaring problems, they should be detected at this stage.  Pay attention to any error messages and double check your copypasta.

WARNING: By default, everything in the “output” directory will be added to the package.  Make sure that it only contains files you want to distribute.

Signing

Signing is a crucial stage of the process. The Metro environment in Windows 8 is much more strict and security-conscious than the desktop environment. Every application must be signed by a trusted entity before the system will allow it to be installed. Presumably, this will be done by Visual Studio when you deploy a package for release and upload it to the Windows App Store. Even during testing, Visual Studio creates temporary certificates for your apps that allow them to run on your development machine.  If we are to do this without the aid of Visual Studio, we first need to create a trusted root certificate, then sign the package using a client certificate.

Generating a Trusted Certificate

To the command prompt!

makecert.exe -n "CN=peterdn.com" -r -a sha1 -sv peterdn.com.pvk peterdn.com.cer -ss root

If you haven’t a clue what’s going on here, simply understand that this generates the public and private key components of a root certificate, and stores them in the files “peterdn.com.cer” and “peterdn.com.pvk”, respectively.  The filenames you choose don’t really matter, but you are probably best naming them after your own domain.  What is important is the certificate name, specified here in the parameter “CN=peterdn.com”.  Change this to your domain, but also remember to mirror the change in the <Identity> element of your AppxManifest.xml, as mentioned previously.  Repackage if necessary.

IMPORTANT: The above command installs the certificate into Windows’ trusted root store, meaning Windows will implicitly trust anything that is signed by it.  This obviously poses a huge security risk.  It is highly recommended that you keep the private key file (.pvk) in a very safe place, and remove the certificate from your root store when you are finished with it.  This can be done from the command-line with certutil.exe, or via GUI with certmgr.exe (where the certificate will be listed under the “Trusted Root Certification Authorities” tab).
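
For example, removing it later should look something like the following (the certificate can be identified here by its common name; check certutil’s built-in help if in doubt):

certutil.exe -delstore root peterdn.com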

Now we generate a client certificate that we use to sign the package.  This certificate is in turn signed by the root certificate in order to establish the required chain of trust.  Run the following command (again, substituting in your own domain name where appropriate):

makecert -a sha1 -sk "peterdn.com" -iv peterdn.com.pvk -n "CN=peterdn.com" -ic peterdn.com.cer -sr currentuser -ss My

This automatically installs the client certificate into your personal store.

Signing the Package

We need to obtain the thumbprint of the client certificate we just created.  Powershell niftily allows us to navigate and explore the certificate store from the command-line.  Run the following command and make a note of the appropriate thumbprint (mine shown below):

PS C:\> dir cert:\CurrentUser\My


    Directory: Microsoft.PowerShell.Security\Certificate::CurrentUser\My


Thumbprint                                Subject
----------                                -------
75DB2ACF57A1BC2DBBF239BBD0FB143F91771103  CN=peterdn.com

Now back to the command prompt.  Sign the package, substituting in your own certificate’s thumbprint:

signtool.exe sign /fd sha256 /sha1 75DB2ACF57A1BC2DBBF239BBD0FB143F91771103 MyApp.appx

If you receive an error like “SignTool Error: An unexpected internal error has occurred. Error information: "Error: SignerSign() failed." (-2147024885/0x8007000b)”, you’ve probably not been consistent with your certificate names.  Double check these and try again.

Installing

To install the package, hop on back over to Powershell.  Execute the following cmdlet:

Add-AppxPackage .\MyApp.appx

If successful, the app should now appear somewhere on the start screen as shown:

App tile on start screen

Running the app, we see the gorgeous UI we expect:

Our Metro app UI

Deploying

The ‘encouraged’ way to deploy Metro apps is obviously via the Windows App Store, and the developer preview of Visual Studio includes relevant tools for validating and uploading apps.  However, at the time of writing, the App Store has not yet been enabled and information about it is still scarce.

So how are you going to deploy your awesome in-house Metro app to your hundreds and thousands of corporate employees?  Well, pretty much the way you deployed it on your own machine.  Install your root certificate (in my case, this was the file “peterdn.com.cer”) on the target machines either using the certmgr.exe GUI or by running the command:

certutil.exe -addstore root peterdn.com.cer

Also, please please remember the above disclaimer about the security implications of abusing trusted root certificates.

Your package can then be installed using Powershell as above.

Download


  1. Code and resources

Relevant Links

How to: Install, validate, and upload your package – MSDN article which provides more details.
Lap around the Windows Runtime – Great session from BUILD giving an overview of WinRT.
Windows Runtime API reference – MSDN reference for the WinRT API.
Windows Runtime internals: understanding "Hello World" – Another highly recommended session from BUILD giving a behind-the-scenes look at how WinRT and app installation works.
How to create a package manifest manually – MSDN reference for more details about the appx manifest.
Introduction to code signing – MSDN introduction to digital certificates and signing.


Windows 8 | WinRT

Tablet Envy

by peterdn 20. September 2010 15:23

Over the past couple of weeks, it seems like not a day has gone by without another ‘iPad-killer’ Android tablet being announced.  Even a comment by Google’s Hugo Barra (director of products for mobile) that Android 2.2 is not designed for tablets has failed to dampen the enthusiasm for these devices.  I, for one, am most definitely enthused, and unfortunately for my bank account, I really want one.  Also, since I’m quite impatient, I want one right now, please.

In a tragic twist, it would seem that the earliest we will be seeing many of them is early-to-mid October.  The latest offerings by Archos are expected to be available at some point next month, and the massively hyped Samsung Galaxy Tab is slated for a November 1 release by Amazon UK.  It looks like it’ll still be a while before we can get our hands on an Android-powered tablet.

That is, unless you look on eBay.  It turns out that a whole smorgasbord of iPad-clones (of varying degrees of quality) have been available from China for months.  I’ve been keeping my eye on these devices since April, but didn’t seriously consider getting one until fairly recently when I finally succumbed to the tablet craze.  The majority have either a 7" or 10" resistive touchscreen, an ARM-compatible CPU of some kind, between 2-4GB of internal flash memory, and run Android 1.6 or 2.1.  After a bit of looking around, I opted for one of the slightly higher-end 7" models, as it seemed to have decent build quality and looked reasonably fast and stable in the demo videos.  I happily parted with £160 and a week later, it arrived in a black box, complete with USB, Micro-USB, HDMI, and power cables, a set of headphones, and a stylus.  The tablet itself came with a book-cover-like leather case which is actually screwed into it at the back, though it can be easily removed.

Tablet + HTC Hero

The tablet, removed from its case, alongside my HTC Hero (and a random pen for scale purposes…)

Firstly, let’s chew through the specs.  The nitty-gritty details of this tablet, exactly as reported by the Android System Info app, are as follows:

  • Model – HSG MIDX5A.
  • OS Version – Android 2.1.
  • CPU – 720 MHz Telechips TCC89/91/92XX (ARMv6-compatible chip).
  • Memory – 148MB.
  • Storage – 4GB internal flash.  Expandable via SD card, up to 16GB.
  • Display – 7" WGA (800x480) resistive touchscreen.
  • Sensors – Telechips 3-axis accelerometer.
  • Wifi – 802.11b/g.
  • Ports – 1x USB, 1x Micro-USB, 1x HDMI output.
  • Battery – 2 x 1300mAh.
  • Speaker + Microphone.

There are several things to note about these specs.  The tablet was advertised as having an 800MHz CPU, though in reality it scales between 36MHz and 720MHz depending on load.  It was also advertised as having 256MB of RAM, though I’m not sure whether this is just plain false information, or whether it’s being reported incorrectly by Android System Info for whatever reason.  The app also reports seeing an accelerometer, though as far as I can tell it does not work in software.  Some sources claim that the accelerometer is not supported in the firmware yet; other sources claim it doesn’t actually have one at all.  I suppose the point is moot anyway, as if it’s not working, it might as well not be there.

Another thing to note is that the tablet does seem to support USB mass storage and input devices (keyboards, mice), though apparently not much beyond that.  I’ve personally tried a USB mouse and memory stick with it, and they work nicely (having a cursor in Android does feel a bit odd, however).

I like

  1. Build quality is impressive, for the price.  Overall it feels very solid, with a hard plastic case and metal trim around the edges.  The only minor aesthetic qualms I have with it are that the power button is not uniformly flush with the trim, and the SD card needs to be pushed in quite hard for it to click in or out.
  2. The menu, home, and back hardware buttons at the right-hand side are very useful, though as they’re completely flush with the screen at all times, they lack any tactile responsiveness.  Sometimes it’s difficult to tell whether you’ve actually pressed a button, or whether an app is simply being laggy.
  3. Performance is good, overall.  The device feels pretty snappy for most tasks, though there is a bit of screen lag when scrolling through lists and such.  The software itself seems very stable.  It easily manages light web browsing and music playback simultaneously (though the built-in music player has a tendency to skip – try ‘Cubed’ media player instead).  The on-screen keyboard can be a little laggy, though I can live with that when I’m just typing search terms into Google.  Might not be ideal for things like word processing, however.
  4. Battery life is better than expected, managing about 5-6 hours of web browsing over wifi while listening to Last.fm.
  5. Android Market works out-of-the-box.

I dislike

  1. The resistive touchscreen is not the most responsive, and takes a little getting used to after using a capacitive screen.  Multi-touch is not supported, though this isn’t a massive issue.  The screen itself is very bright, but also very reflective, making it hard to view under bright lighting.
  2. Sleep does not work.  The power button at the side just seems to turn the screen off, the result being that the device eats battery life when it is not in use.  It can, of course, be fully powered off, but the 45-second boot time makes this less than ideal.
  3. Screen orientation cannot be changed easily, and as the accelerometer is currently defunct-slash-nonexistent, rotating the device has no effect at all.  The tablet is clearly designed to be used in landscape mode most of the time, and this is how most apps appear on it.  However, certain apps (such as the aforementioned Android System Info) force the screen into portrait mode.  It is annoying that when you then return to the home screen, it remains in portrait mode instead of switching back to landscape.  I’ve found that disabling the automatic orientation switching setting reduces this problem, and apps that can display in landscape will tend to switch to landscape (though the home screen does not).
  4. No Win64 ADB USB drivers, at least none that I’ve found.  The manufacturer provides 32-bit drivers for Windows 2000, XP, and Vista/7 only.  Considering I bought this device partly for development purposes, this is a bit of a bummer.  It is possible to run ADB over TCP/IP, though this requires a USB connection to enable (see the note just after this list).
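
A quick footnote on the ADB-over-TCP/IP workaround mentioned in point 4: assuming both the tablet’s firmware and the SDK’s adb support the tcpip command, the usual sequence (run while the device is still plugged in over USB) is roughly:

    adb tcpip 5555
    adb connect 192.168.1.100:5555

after which the cable can be unplugged and adb carries on over wifi.  The IP address above is just an example; use whatever address the tablet picks up on your network.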

Thoughts on Android

The tablet runs no-frills Android 2.1, and although it has the same screen resolution as an HTC Desire or Samsung Galaxy, the extra 3 inches makes some things appear a little stretched.  Most apps scale nicely, though in many ways it just feels like a big phone.  This is hardly surprising, considering that, at the moment, the vast majority of apps (and as Google says, the OS itself) are designed for sub-4" displays.  On the whole, however, I’m pleasantly surprised at how well things scale given no modifications, but I’m also looking forward to the time when my apps specifically take advantage of larger displays. 

Home screen screenshot

BBC news website screenshot

Screenshots of the home screen and BBC news website, in landscape mode.

Overall thoughts

It’s a nifty little device for casual web browsing from the couch, and listening to a bit of music, though it definitely has its quirks.  For that reason, although I’m happy with the money I spent, I’m not sure I could recommend it to anyone without a huge disclaimer.  If someone bought this expecting a flawless user experience like I’m sure the iPad provides, they would be sorely disappointed.  The adage that you get what you pay for is certainly true here.  I may or may not still buy a Galaxy Tab.

It’s time to wrap up.  Eventually I’ll get around to trying and writing about a couple of my own ideas for utilizing all 7" of screen space on Android, as it stands at the moment.  In other news, I’m still excited by MonoDroid and I’m eagerly anticipating the finished product.  Related to that, I’m also looking forward to the release of the first of the Windows Phone 7 devices and the resulting competition and innovation that another platform will bring to the scene.  It’s an exciting time for mobile users and developers alike.


Android

Summer of WebKit, Part 2: Printing

by peterdn 29. August 2010 15:57

My main aim for WebKit .NET over the past month or two has been to implement printing.  WebKit Cairo has had printing support since January, and I figured it would be a fairly simple task to hook onto that functionality in the .NET world.  As it turned out, there were one or two issues that had to be worked out first.

Handling _RemotableHandle

Several of the printing methods in the IWebFramePrivate interface (in the WebKit COM API) take a parameter of type HDC, which represents a handle to a graphics device context — in this case, a printer device.  Like all Windows handle types, HDC is defined as (void *); however, the handle itself is only 32 bits long and so is sign-extended to fit into 64 bits on Win64 (according to this white paper).  Unfortunately, whether due to a bug or by design, the MIDL compiler marshals this type as a pointer to the barely documented _RemotableHandle structure, which looks like the following:

public struct _RemotableHandle {
    public int fContext;
    public __MIDL_IWinTypes_0009 u;
}

public struct __MIDL_IWinTypes_0009 {
    public int hInProc;
    public int hRemote;
}

When run through tlbimp.exe and imported into C#, the type signature for one of these printing functions turns out to be:

public uint getPrintedPageCount(ref _RemotableHandle printDC);

Immediately skeptical, I tried various combinations of setting hInProc and hRemote to the value of the printer device handle, and setting fContext to one of the arcane-looking constants WDT_INPROC_CALL and WDT_REMOTE_CALL, but nothing seemed to work.  My first solution was to modify the relevant IDL files in WebKit itself and change the HDC types to OLE_HANDLE.  This worked well, as OLE_HANDLE is defined as a 32-bit value and MIDL marshals it as an Int32.  After inserting a few casts here and there, my printing code suddenly started to work!

This solution was not ideal, however, for a couple of reasons.  Firstly, it required me to make changes to WebKit each time, including inserting casts to OLE_HANDLE in all the functions I needed.  Secondly, as OLE_HANDLE is always 32 bits long, it was likely to cause headaches when the time came to port WebKit to Win64.  I tried getting around this by using (void *) or a different type of handle, but MIDL either didn’t like this or just marshalled the type as a _RemotableHandle again.  Damn.

The solution I eventually settled on is a bit inelegant, but seems to work as intended.  Leaving WebKit well alone, I ran the generated interop assembly through ildasm.exe, replaced all occurrences of the string “valuetype WebKit.Interop._RemotableHandle&” with “int32”, ilasm.exe’d it back together, and voila, it worked perfectly!  So I wrote a small tool that does this find and replace automatically (UNIX aficionados will scoff at this point; a rough sketch of the idea is shown below), and lumped it all into the solution’s pre-build event.
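
For the curious, the tool really is nothing more than a glorified find-and-replace over the ildasm output.  A minimal sketch of the idea (not the actual tool; the file name here is just an example) might look like this:

using System;
using System.IO;

// Crude find-and-replace over the IL text emitted by ildasm.exe.  The patched
// file is then reassembled with ilasm.exe as part of the pre-build event.
class PatchInteropIL
{
    static void Main(string[] args)
    {
        // First argument: path to the disassembled .il file (example name only).
        string path = args.Length > 0 ? args[0] : "WebKit.Interop.il";

        // Replace every by-ref _RemotableHandle parameter with a plain int32.
        string il = File.ReadAllText(path);
        il = il.Replace("valuetype WebKit.Interop._RemotableHandle&", "int32");
        File.WriteAllText(path, il);

        Console.WriteLine("Patched {0}", path);
    }
}

After reassembling, the printing methods effectively take a plain int32 where the _RemotableHandle reference used to be.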

Functional Printing

With that sorted, and once I discovered that WebKit measures margins in 1000ths of an inch (a key bit of knowledge), printing functionality began to fall into place.  There are still some minor issues with regard to margin sizing, elements which span multiple pages, and print preview, but for the most part it works.  This functionality is included in the 0.5 release of WebKit .NET.
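
As an aside on units: the stock .NET printing types in System.Drawing.Printing express margins in hundredths of an inch, so a conversion along these lines is needed somewhere when margins coming from the standard .NET printing classes are handed over to WebKit (a sketch of the idea only, with made-up names, rather than the actual WebKit .NET code):

using System.Drawing.Printing;

static class WebKitMargins
{
    // PageSettings.Margins are measured in hundredths of an inch, whereas
    // WebKit measures print margins in thousandths of an inch: scale by 10.
    public static int ToThousandthsOfInch(int hundredthsOfInch)
    {
        return hundredthsOfInch * 10;
    }

    // Convenience helper taking the margins straight from a PageSettings.
    public static void Convert(Margins m, out int left, out int top, out int right, out int bottom)
    {
        left = ToThousandthsOfInch(m.Left);
        top = ToThousandthsOfInch(m.Top);
        right = ToThousandthsOfInch(m.Right);
        bottom = ToThousandthsOfInch(m.Bottom);
    }
}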

For the next few weeks I’m planning on focussing on JavaScript-C# interop.  I’ve already started playing around with JavaScriptCore to get a feel for how it works (see my JavaScriptCore Shell on GitHub).  Looks like I’m going to have to dust off my C++/CLI skills for the next installment…

In other random news, I’ve been distracted recently by an invite to the MonoDroid preview.  As would be expected of a pre-beta software release, it’s not perfect yet, but I’ve been very impressed by it so far.  The substantial interest generated in it, the huge amount of activity on the mailing list, and the speed at which the developers respond to and fix issues certainly bode well for the finished product.  For me, the ability to code for Android from within Visual Studio alone may make it worthwhile.  We’ll see.  I may or may not get around to blogging about it.


WebKit | WebKit .NET

Summer of WebKit, Part 1: Compiling WebKit

by peterdn 19. July 2010 14:52

(Updated December 18 2010.  The build process has changed subtly – updated info is marked in bold.)

With summer upon us, I’m finally getting a bit more time to work on some WebKit and WebKit .NET stuff.  My intention is to make my way through the TODOs in the project roadmap and blog about any interesting bits as I go.  As can be seen, there is a fair amount to be done, hence the ‘Part 1’!

I started, as I usually do, with the mammoth task of compiling WebKit, which inevitably takes much longer than I anticipate.  The exact process seems to change very subtly each time.  Here I’ll describe the exact steps I went through this time (with WebKit revision 63600 or so).  Although this is mainly for my own reference, it may save someone else a couple of hours if they run into the same problems that I did.  So, to get it building (assuming a build environment is configured as described here, Visual Studio 2008 is installed, and the WebKit source and support libraries have been downloaded):

  1. Grab the dependencies required to build the Cairo port (updated October 2 2010; thanks Brent Fulgham) and extract them somewhere.
  2. Fire up Visual Studio 2008 and open WebKit/win/WebKit.vcproj/WebKit.sln.  Run through the conversion process and finish.  Add the ‘include’ and ‘include/cairo’ directories from the dependencies in the previous step to the Visual C++ include directory list.  Similarly, add the ‘lib’ directory to the Visual C++ libraries directory list.  Close Visual Studio.
  3. Open a cygwin prompt, navigate to the WebKit root directory and run the script (Updated December 18 2010: The WebKitTools directory has been renamed to Tools):
    Tools/Scripts/build-webkit
    This downloads another set of libraries and installs them, along with the support libraries.  As we’ve converted the project to VS 2008, and the script is configured to build with VS 2005, the build will fail, but we can ignore that.  There is probably a more elegant way to install these libraries, but this method suffices for the moment.  No worries there.
  4. Added December 18 2010: It's now also necessary to run the following script to install another set of libraries:
    Tools/Scripts/update-webkit
    This will also update to the latest SVN revision.
  5. Open up WebKit.sln in VS 2008 again and set the build configuration to Release_Cairo (or Debug_Cairo as the case may be).  VC++ 2008 is much stricter than VC++ 2005 and so issues more warnings, and by default, the WebKit projects are configured to treat warnings as errors.  In the C++ configuration properties for every project (specifically, JavaScriptCore, WTF, jsc, testapi, WebCore, QTMovieWin, WebKit, and WebKitLib), disable the ‘Treat Warnings as Errors’ option.  Optionally, turn off warnings completely for a smoother build experience (I got over 60,000 on warning level 4 before my machine seized up).
  6. Optionally, disable optimization for JavaScriptCore, WebCore, WebKit, and WebKitLib, but remember to enable it again before a release!  This drastically decreased build time for me.
  7. In the WebCore librarian configuration properties, add the following switch to the command line options:
    /expectedoutputsize:3000000000
    This is a workaround for a problem where the linker runs out of memory and issues a “LNK1106: invalid file or disk full: cannot seek to 0x…” error message.  More information can be found in this Microsoft KB article.
  8. Optionally, but required for the DOM access functionality in WebKit .NET 0.4+, replace WebKit/win/DOMCoreClasses.cpp and WebKit/win/DOMCoreClasses.h with these modified versions.  Hopefully I’ll get these changes committed to the actual WebKit repository at some point, but until then you can grab them from here.
  9. We’re pretty much done.  Hit build and go make a cup of tea or do something else constructive for a while.  In my experience, it takes about 45-60 minutes to build from scratch with optimizations disabled, and 2-3 hours with them enabled.

And voila, we (should) have a fully-functioning WebKit Cairo build to use.

As it turns out, I did get enough time to implement basic printing functionality in WebKit .NET (no page setup or print preview yet) as well, but I’ll leave that for part 2 when it’s more complete.  The changes should be up on SVN very soon, and a 0.5 release is well on its way.


WebKit | WebKit .NET

Apple vs. Web Standards

by peterdn 5. June 2010 22:45

There was much excitement in the Twittersphere and elsewhere on the web when Apple launched its HTML5 showcase yesterday, the purported intention being to spread awareness of the upcoming features of HTML5 + CSS3.  In classic Apple fashion it succeeds at this quite well—assuming you are using their very own product.  In fact, the page goes as far as to misleadingly imply that Safari is the only HTML5-ready browser, and also prevents other (equally capable, if not more so) browsers from viewing the demos.*  I tweeted a somewhat tongue-in-cheek screenshot (see below) yesterday showing both Chrome and Safari displaying “The HTML5 Test” results and Apple’s HTML5 showcase, which sums up the deception nicely.  Additionally, the page headline and the blurb about “web standards” make the whole thing even more ironic.

This particular view on the story was picked up by several tech news sites, including TechCrunch and OSnews, and by a number of bloggers, notably the Guardian technology blog.  I won’t reiterate what has already been said by these guys, other than to jump on the bandwagon and berate Apple for its blatant hypocrisy and misuse of the term “web standards”.  If Steve Jobs really wants the likes of Flash to die, he may have to rethink his strategy.

Chrome vs Safari

Screenshot showing Chrome 5.0.375.55 and Safari 4.0.5 (531.22.7) (on Windows) displaying the results of the HTML5 test and the Apple HTML5 showcase

 

* At the time of writing, the HTML5 Showcase still blocks Google Chrome.  However, the exact same demos can be found on the Apple developer site where browser-sniffing is apparently disabled.

Some other vaguely relevant and interesting links:

Intellectual Honesty and HTML5 - Mozilla’s Chris Blizzard’s opinions.
Internet Explorer 9 Test Drive – A similar offering from Microsoft, showing off the (limited) HTML5 / CSS3 features that are to be supported by IE9.  Most of the demos seem to work in Chrome.
HTML5 Gallery – A showcase of “real” sites using HTML5.


HTML5