Jam Structure

What is a JAM file?

A JAM file as used in GP2 is an archive of images that can be used within the F1GP2 simulator. These images are typically used for car liveries, adverts, scenery textures, road markings, buildings, and so on. They are also used to store 3D texture mappings for the car shapes in the form of sprites (which explains why removing car textures was never an option!).

About this document and the Jam Editor

The purpose of this document is to provide information regarding the format of Grand Prix 2 'JAM' files. JAM files are used in GP2 to store textures used in tracks as adverts, scenery and so on, as described later in this file. The information in this primer was originally passed to Paul Hoad by Trevor Kellaway and others in the GP2 community.

The need for an editor for JAM files becomes self-evident as soon as you start creating your own textures for use in your tracks. While Trevor Kellaway's excellent GP2JAM program provides us with most of what we need to create new JAM files, there are still some things it can't do. The Jam Editor project was started by Paul Hoad in an attempt to fill the gaps left by GP2JAM and is continued today, as an Open Source development, by various members of the GP2 community.

As you read through this file, you will probably find it necessary to refer to the Glossary. There are a few key terms defined in this file that I'd like to become standard terms used when talking about JAMs. Key among them are the words canvas and texture.


The JAM file itself is encrypted and must be decrypted before it can be altered in any editor. The encryption follows a fairly simple algorithm, making repeated use of an XOR operation. (The algorithm was originally passed to Paul Hoad by Frank Ahnert, the maker of gp2hipic and an all-round good guy.) The following pseudocode explains the algorithm (precise details can be seen in the source code):
    encrypt/decrypt the first four bytes of the file using the pattern 0xB082F165
    loop
        multiply the encryption/decryption pattern by 5
        encrypt/decrypt the next four bytes using the new pattern
    until all bytes in the file have been encrypted/decrypted
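The steps above can be sketched in C. Note two assumptions on my part that the pseudocode doesn't spell out: the file is processed as little-endian 32-bit words (GP2 is an x86 game), and any trailing bytes short of a full word are left untouched. Because XOR is its own inverse, the same function both encrypts and decrypts:

```c
#include <stdint.h>
#include <stddef.h>

/* Encrypt or decrypt a JAM buffer in place (XOR is symmetric).
   The buffer is treated as a sequence of little-endian 32-bit words. */
void jam_crypt(uint8_t *buf, size_t len)
{
    uint32_t key = 0xB082F165u;           /* the initial pattern */
    for (size_t i = 0; i + 4 <= len; i += 4) {
        uint32_t w = (uint32_t)buf[i]
                   | (uint32_t)buf[i + 1] << 8
                   | (uint32_t)buf[i + 2] << 16
                   | (uint32_t)buf[i + 3] << 24;
        w ^= key;                         /* apply the current pattern */
        buf[i]     = (uint8_t)(w);
        buf[i + 1] = (uint8_t)(w >> 8);
        buf[i + 2] = (uint8_t)(w >> 16);
        buf[i + 3] = (uint8_t)(w >> 24);
        key *= 5;                         /* next word uses a new pattern */
    }
}
```

Running the function twice over the same buffer returns it to its original state, which is a handy sanity check for any implementation.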

Structure of a JAM file

Well, a JAM file consists of the following elements:

  • A JAM header containing basic information about the contents of the JAM file
  • A set of texture headers, each containing
    • the texture ID
    • the size and position of the texture within the overall JAM canvas
    • transparency information
    • miscellaneous other fields, including some unknowns
  • A set of 4 palettes (yes, 4) for each texture (used to colour the object differently as it moves into the distance - a 'hazing' effect)
  • A number of bytes representing a 2D array of pixels, the array being of width 256 and a height given in the JAM header

The JAM and Texture headers

The JAM and texture headers are loaded into the current Jam Editor (25th Jan 99) as the following structures:

typedef struct
{
    WORD m_wNumTextures;        // number of textures defined in JAM
    WORD m_wJamImageHeight;     // total height of JAM image in pixels
} JAM_HEADER;

typedef struct
{
    BYTE m_byLeft;              // left of texture within image
    BYTE m_byTop;               // top of texture within image
    WORD m_wUnk02;
    WORD m_wWidth;              // width of texture within image
    WORD m_wHeight;             // height of texture within image
    WORD m_wUnk08;
    WORD m_wUnk0A;
    WORD m_wImagePtr;           // offset to image data in JAM file?
    WORD m_wUnk0E;
    WORD m_wQuarterPaletteSize; // size of each of our 4 palettes
    WORD m_wTextureID;          // the all-important texture ID (jam ID)
    WORD m_wTransparent;        // transparency flag (8=yes, 0=no)
    BYTE m_byUnk16;
    BYTE m_byUnk17;
    BYTE m_byUnk18[8];
} JAM_TEXTURE;

The JAM_HEADER is read from the file first, so that we know how many texture headers we need to load. This is done by the CJam class. Then, for each of the m_wNumTextures, we create a new CJamTexture object and load the next JAM_TEXTURE into it. Next we load the four local palettes into each of the textures we have created. Finally, all of the canvas pixels are loaded and are passed to the texture objects for them to extract their own area of pixels.
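As a rough sketch of the first step of that loading sequence, here is how a decrypted buffer might be parsed in plain C. The function names are mine (the real editor uses the C++ classes mentioned above), and I'm assuming the fields are stored little-endian, x86-style; the 32-byte texture header size simply follows from adding up the struct fields (2 + 10×2 + 2 + 8):

```c
#include <stdint.h>
#include <stddef.h>

typedef struct {
    uint16_t num_textures;   /* m_wNumTextures */
    uint16_t image_height;   /* m_wJamImageHeight */
} jam_header_t;

/* Read the 4-byte JAM_HEADER from the start of a decrypted buffer.
   Returns 0 on success, -1 if the buffer is too short. */
int jam_read_header(const uint8_t *buf, size_t len, jam_header_t *hdr)
{
    if (len < 4)
        return -1;
    hdr->num_textures = (uint16_t)(buf[0] | (buf[1] << 8));
    hdr->image_height = (uint16_t)(buf[2] | (buf[3] << 8));
    return 0;
}

/* Each JAM_TEXTURE is 32 bytes, so texture header i starts here. */
size_t jam_texture_offset(size_t i)
{
    return 4 + i * 32;
}
```

After reading the header, a loader would loop jam_texture_offset(0) .. jam_texture_offset(num_textures - 1), then read the palettes and the canvas pixels that follow.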


You will have noticed that there are still many unknowns in the JAM_TEXTURE structure - I don't know what they hold. I don't even know how Paul (or whoever) determined that they are of the form given (i.e. WORDs vs. BYTEs). The unknowns could be something like a transparency colour, a memory location at which the data should be loaded, etc. Who knows? They're just guesses!

Palettes and pixels

There is a lot of important information in this section, so hold tight - we're going in... :-)

Translating pixel values from the canvas

The colours used to display a texture's pixels in GP2 are determined by three things:

  1. the value of the pixel in the canvas;
  2. the local palette being used to display the texture (this depends upon distance from the camera, palette one being used for nearby textures and palette four for distant ones);
  3. the global palette used by all JAM files. 
To obtain the correct pixel colour at any given point in a texture, we use the value of the pixel in the canvas as an index into the local palette. The value at this index in the local palette is then itself used as an index into the global palette.
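In code, that two-step lookup is a pair of array indexings. A minimal sketch (the function name and the fixed 256-entry palette size are my assumptions; a texture's real local palette only has m_wQuarterPaletteSize entries):

```c
#include <stdint.h>

/* Resolve one canvas pixel to a final global-palette index.
   'band' selects which of the texture's four local palettes to use
   (0 = nearest to the camera, 3 = most distant). */
uint8_t jam_resolve_pixel(uint8_t canvas_value,
                          const uint8_t local_palettes[4][256],
                          int band)
{
    /* canvas value -> local palette entry -> global palette index */
    return local_palettes[band][canvas_value];
}
```

The value returned is still an index; the actual RGB colour comes from looking that index up in GP2's fixed global palette.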

The implications of having four local palettes

This is where things start getting complicated. At this point, I need to introduce a new term. A 'decoded texture' refers to the pixels that comprise a texture after it has been displayed using one of the texture's 4 palettes. Sorry I couldn't think of a better name for it than that. At least it fits in with the CJamTexture::DecodePixels function in the source code.

You will notice in the local palettes that many of the entries are duplicates of other values in the palette. There are very good reasons for this.

The key to understanding the palettes is realising that the single array of pixels in the canvas has to be able to produce 4 different bitmaps. This means that any given pixel in the canvas may be converted to a different colour in each of the decoded textures. Moreover, two pixels which have the same colour in decoded texture #1 might not have the same colours in decoded texture #2. This is very, very important. The consequence of this is that we must make these two pixels refer to different entries in the local palettes if they are to appear the same in decoded texture #1, but different in decoded texture #2.

I hope you followed that. :-) It explains why there are so many repeated numbers in the local palettes - they are not identical in all of the four local palettes. Furthermore, we can deduce from this that the size of the local palette required for any given texture is entirely dependent upon how the 'hazing' affects the number of unique colours in each of its decoded textures.

The hazing effect of the local palettes

To study the hazing effect produced by the local palettes, we'll take the MCOJAMS\RASSC.JAM as a case study. Specifically, we'll be looking at only the first texture defined in this file. This texture can be seen here in each of its decoded forms, magnified to 4 times its usual size.

[Images: the texture as decoded with Palette 1, Palette 2, Palette 3 and Palette 4]
At first sight, you may think (as I did) that the creator of a new JAM file would have to actually create each of these four bitmaps himself for import into the Jam Editor. The Jam Editor would then use these bitmaps to determine what values the local palette entries should have.

However, looking at the decoded textures more closely, we can see that each successive one appears to be a blurred ('hazed') version of the one produced by the previous palette. Also, we can see that no new colours are being produced by this hazing effect - indeed, the number of colours is actually decreasing. These effects are characteristic of median filtering (which you've probably seen in PSP). If you don't believe me, try applying a median filter to the Palette 1 picture shown above. The result is very similar to the Palette 2 picture - the reason for the difference will be down to the 'size' of the filter applied to the picture (there's no way to change this in my version of PSP).
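To make the comparison concrete, here is what a plain 3x3 median filter looks like. This is only an illustration of the operation: GP2's canvas stores palette indices rather than greyscale values, so a real tool would filter the decoded colours, not the raw canvas (and the function below, including its border handling, is entirely my own sketch):

```c
#include <stdint.h>
#include <stdlib.h>

static int cmp_u8(const void *a, const void *b)
{
    return (int)*(const uint8_t *)a - (int)*(const uint8_t *)b;
}

/* 3x3 median filter on an 8-bit greyscale image.
   Border pixels are copied through unchanged. */
void median3x3(const uint8_t *src, uint8_t *dst, int w, int h)
{
    for (int y = 0; y < h; y++) {
        for (int x = 0; x < w; x++) {
            if (x == 0 || y == 0 || x == w - 1 || y == h - 1) {
                dst[y * w + x] = src[y * w + x];
                continue;
            }
            uint8_t win[9];
            int n = 0;
            for (int dy = -1; dy <= 1; dy++)       /* gather the 3x3 window */
                for (int dx = -1; dx <= 1; dx++)
                    win[n++] = src[(y + dy) * w + (x + dx)];
            qsort(win, 9, 1, cmp_u8);              /* sort the 9 samples */
            dst[y * w + x] = win[4];               /* take the middle one */
        }
    }
}
```

Because the output pixel is always one of the nine input samples, a median filter never invents new colours - which matches the observation above that the hazed textures contain no colours absent from the Palette 1 picture.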


Importing new bitmaps

The similarity of the hazing effect to median filtering means that it will probably be possible to produce all four decoded textures automatically from a single bitmap. This is, of course, great news for you, the user. :-) From these decoded textures we can then work backwards to produce the local palettes and, eventually, the canvas pixels...

The required size of the local palettes will be given by the number of unique pixel quartets (DT = decoded texture):
pixel quartet = {colour in DT #1, colour in DT #2, colour in DT #3, colour in DT #4} 

Each of these pixel quartets will then be put into a single position in the local palettes - the first colour in palette 1, the second in palette 2, and so on. Finally, each of the texture's pixels in the canvas will be set to reference the palette entry which represents how that pixel appears in the decoded textures.
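The construction just described can be sketched as follows. All of the names here are mine, and I use a simple linear search over the palette entries for clarity (the real Jam Editor code will differ); the function fails by returning 0 if the four decoded textures need more than 256 unique quartets:

```c
#include <stdint.h>
#include <stddef.h>

/* Build the four local palettes from the four decoded textures (DTs)
   by collecting unique colour quartets, and write each canvas pixel
   as the index of its quartet's palette entry.
   Returns the number of palette entries used, or 0 on overflow. */
size_t build_local_palettes(const uint8_t *dt[4], size_t npixels,
                            uint8_t *canvas, uint8_t palettes[4][256])
{
    size_t nentries = 0;
    for (size_t p = 0; p < npixels; p++) {
        uint8_t q[4] = { dt[0][p], dt[1][p], dt[2][p], dt[3][p] };
        size_t i;
        for (i = 0; i < nentries; i++)            /* seen this quartet? */
            if (palettes[0][i] == q[0] && palettes[1][i] == q[1] &&
                palettes[2][i] == q[2] && palettes[3][i] == q[3])
                break;
        if (i == nentries) {                      /* new quartet */
            if (nentries == 256)
                return 0;                         /* too many unique quartets */
            for (int k = 0; k < 4; k++)
                palettes[k][i] = q[k];            /* k-th colour -> palette k */
            nentries++;
        }
        canvas[p] = (uint8_t)i;                   /* pixel references entry i */
    }
    return nentries;
}
```

Two pixels end up sharing a canvas value only if they have the same colour in all four decoded textures, which is exactly the rule derived in the section on the implications of having four local palettes.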


Note: for an explanation of how the canvas pixels, local palette and global palette are all linked, see the section on Translating pixel values from the canvas.


Glossary

JAM File
A JAM file is the format in which the graphics used in GP2 are stored in your filesystem.
Canvas
The canvas is the rectangular area of pixels stored in the JAM file.
Texture
A texture is a single rectangular area of the canvas. Textures can be mapped onto objects and ribbons in your track files (no doubt you're familiar with the term 'texture-mapping'). Each JAM file contains one or more textures.
Global Palette
GP2 uses a single, predefined palette for displaying all of its graphics (once you're actually in the game proper). I refer to this palette as the 'global palette'. All custom bitmaps to be imported into JAM files must use this palette.
Local Palette
Each texture has four local palettes. The pixel values from the canvas refer to entries in these palettes, which in turn refer to colours in the global palette, thus giving the pixels their colours. The local palettes allow the texture to be drawn differently at different distances from the camera.
Decoded Texture
A decoded texture is formed of the pixels of a texture after they have been assigned a colour using one of the texture's local palettes.
Hazing
Hazing is the effect used in the JAMs that came with GP2 for showing textures at a distance. The effect appears to be produced by repeatedly applying a median filter to the original image. For more info, see the section on The hazing effect of the local palettes.
Copyright © 2001 John Verheijen. All Rights Reserved