A JAM file as used in GP2 is an archive of images that can be used within the F1GP2 simulator. These images are typically used for car liveries, adverts, scenery textures, road markings, buildings, and so on. They are also used to store 3D texture mappings for the car shapes in the form of sprites (which explains why removing car textures was never an option!).
The purpose of this document is to provide information regarding the format of Grand Prix 2 'JAM' files. JAM files are used in GP2 to store textures used in tracks as adverts, scenery and so on, as described later in this file. The information in this primer was originally passed to Paul Hoad by Trevor Kellaway and others in the GP2 community.
The need for an editor for JAM files becomes self-evident as soon as you start creating your own textures for use in your tracks. While Trevor Kellaway's excellent GP2JAM program provides us with most of what we need to create new JAM files, there are still some things it can't do. The Jam Editor project was started by Paul Hoad in an attempt to fill the gaps left by GP2JAM and is continued today, as an Open Source development, by various members of the GP2 community.
As you read through this file, you will probably find it necessary to refer to the Glossary. There are a few key terms defined in this file that I'd like to become standard terms used when talking about JAMs. Key among them are the words canvas and texture.
The JAM file itself is encrypted and must be decrypted before it can be altered in any editor. The encryption follows a fairly simple algorithm, making repeated use of an XOR operation. (The algorithm was originally passed to Paul Hoad by Frank Ahnert, the maker of gp2hipic and an all-round good guy.) The precise details can be seen in the Jam Editor source code.
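To give a feel for what a repeated-XOR pass looks like, here is a minimal sketch. Note that the key seed and the key-update rule used below are placeholders for illustration, not GP2's actual values - for those, consult the Jam Editor source.

```cpp
#include <cstddef>
#include <cstdint>

// Sketch of a repeated-XOR (de)cryption pass. The initial key and the
// update rule here are placeholders, NOT the real GP2 scheme.
void decryptBuffer(uint8_t* data, size_t length, uint8_t key)
{
    for (size_t i = 0; i < length; ++i)
    {
        data[i] ^= key;          // XOR each byte with the current key
        key = (key + 1) & 0xFF;  // advance the key (placeholder rule)
    }
}
```

A pleasant property of XOR is that it is its own inverse: running the same routine a second time, with the same initial key, restores the original bytes.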
Structure of a JAM file
Well, a JAM file consists of the following elements:
The JAM and texture headers are loaded into the current Jam Editor (25th Jan 99) as the following structures:
The JAM_HEADER is read from the file first, so that we know how many texture headers we need to load. This is done by the CJam class. Then, for each of the m_wNumTextures, we create a new CJamTexture object and load the next JAM_TEXTURE into it. Next, we load the four local palettes into each of the textures we have created. Finally, all of the canvas pixels are loaded and passed to the texture objects for them to extract their own area of pixels.
You will have noticed that there are still many unknowns in the JAM_TEXTURE structure - I don't know what they hold. Personally, I don't even know how Paul/whoever knows that they are of the form given (i.e. WORDs vs. BYTEs). The unknowns could be something like the transparency color, memory location at which they should be loaded, etc. Who knows? They're just guesses!
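Since the actual declarations are not reproduced here, the following is only a guessed sketch of what the two structures might look like, pieced together from the description above. The field names, the layout, and the placement of the unknown WORDs are all assumptions, not the Jam Editor's real declarations:

```cpp
#include <cstdint>

typedef uint16_t WORD;  // 16-bit, as in the MFC-era typedefs
typedef uint8_t  BYTE;  // 8-bit

// HYPOTHETICAL layout, reconstructed from the prose description only.
struct JAM_HEADER
{
    WORD m_wNumTextures;  // number of JAM_TEXTURE records that follow
    // ... further header fields, not documented here ...
};

struct JAM_TEXTURE
{
    WORD m_wXPos;      // position of the texture within the canvas (guess)
    WORD m_wYPos;
    WORD m_wWidth;     // texture dimensions (guess)
    WORD m_wHeight;
    WORD m_wUnknown1;  // possibly a transparency colour? (guess)
    WORD m_wUnknown2;  // possibly a load address? (guess)
};
```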
Palettes and pixels
There is a lot of important information in this section, so hold tight - we're going in... :-)
The colours used to display a texture's pixels in GP2 are determined by three things:
This is where things start getting complicated. At this point, I need to introduce a new term. A 'decoded texture' refers to the pixels that comprise a texture after it has been displayed using one of the texture's 4 palettes. Sorry I couldn't think of a better name for it than that. At least it fits in with the CJamTexture::DecodePixels function in the source code.
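The decoding step itself is simple indirection. The sketch below illustrates the idea behind CJamTexture::DecodePixels - the signature and names are illustrative guesses, not the editor's real code:

```cpp
#include <array>
#include <cstdint>
#include <vector>

// Illustrative sketch: each canvas pixel is an index into one of the
// texture's four local palettes, and each local-palette entry is in turn
// an index into GP2's global palette. (Guessed signature, not the real
// CJamTexture::DecodePixels.)
std::vector<uint8_t> decodePixels(const std::vector<uint8_t>& texturePixels,
                                  const std::array<uint8_t, 256>& localPalette)
{
    std::vector<uint8_t> decoded;
    decoded.reserve(texturePixels.size());
    for (uint8_t p : texturePixels)
        decoded.push_back(localPalette[p]);  // look up the global-palette index
    return decoded;
}
```

Running this once per local palette yields the four decoded textures discussed below.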
You will notice in the local palettes that many of the entries are duplicates of other values in the palette. There are very good reasons for this.
The key to understanding the palettes is realising that the single array of pixels in the canvas has to be able to produce 4 different bitmaps. This means that any given pixel in the canvas may be converted to a different colour in each of the decoded textures. Moreover, two pixels which have the same colour in decoded texture #1 might not have the same colours in decoded texture #2. This is very, very important. The consequence of this is that we must make these two pixels refer to different entries in the local palettes if they are to appear the same in decoded texture #1, but different in decoded texture #2.
I hope you followed that. :-) It explains why there are so many repeated numbers in the local palettes - entries that are duplicates in one local palette are not necessarily duplicates in the other three. Furthermore, we can deduce from this that the size of the local palette required for any given texture depends entirely upon how the 'hazing' affects the number of unique colours in each of its decoded textures.
To study the hazing effect produced by the local palettes, we'll take MCOJAMS\RASSC.JAM as a case study. Specifically, we'll be looking at only the first texture defined in this file. This texture can be seen here in each of its decoded forms, magnified to 4 times its usual size.
At first sight, you may think (as I did) that the creator of a new JAM file would have to actually create each of these four bitmaps himself for import into the Jam Editor. The Jam Editor would then use these bitmaps to determine what values the local palette entries should have.
However, looking at the decoded textures more closely, we can see that each successive one appears to be a blurred ('hazed') version of the one produced by the previous palette. Also, we can see that no new colours are being produced by this hazing effect - indeed, the number of colours is actually decreasing. These effects are characteristic of median filtering (which you've probably seen in PSP). If you don't believe me, try applying a median filter to the Palette 1 picture shown above. The result is very similar to the Palette 2 picture - the reason for the difference will be down to the 'size' of the filter applied to the picture (there's no way to change this in my version of PSP).
For more information on Median Filters, follow this link.
The similarity of the hazing effect to median filtering means that it will probably be possible to produce all four decoded textures automatically from a single bitmap. This is, of course, great news for you, the user. :-) From these decoded textures we can then work backwards to produce the local palettes and, eventually, the canvas pixels...
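If you want to experiment with the median-filter comparison yourself, a basic 3x3 median filter over a single-channel image looks like this (a generic sketch of the standard technique, not code from the Jam Editor; border pixels are simply copied unchanged):

```cpp
#include <algorithm>
#include <array>
#include <cstdint>
#include <vector>

// 3x3 median filter on a single-channel image: each interior pixel is
// replaced by the median of its 3x3 neighbourhood. Borders are copied.
std::vector<uint8_t> medianFilter3x3(const std::vector<uint8_t>& img,
                                     int width, int height)
{
    std::vector<uint8_t> out(img);  // start with a copy (keeps the borders)
    for (int y = 1; y < height - 1; ++y)
        for (int x = 1; x < width - 1; ++x)
        {
            std::array<uint8_t, 9> window;
            int k = 0;
            for (int dy = -1; dy <= 1; ++dy)
                for (int dx = -1; dx <= 1; ++dx)
                    window[k++] = img[(y + dy) * width + (x + dx)];
            // Partially sort so that window[4] holds the median of the nine
            std::nth_element(window.begin(), window.begin() + 4, window.end());
            out[y * width + x] = window[4];
        }
    return out;
}
```

Note that, as with the hazing effect, a median filter never introduces new colours - every output pixel is one of the input pixels - which is consistent with the observation above that the number of colours decreases.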
The required size of the local palettes will be given by the number of unique pixel quartets (DT = decoded texture):
Each of these pixel quartets will then be put into a single position in the local palettes - the first colour in palette 1, the second in palette 2, and so on. Finally, each of the texture's pixels in the canvas will be set to reference the palette entry which represents how that pixel appears in the decoded textures.
Note: for an explanation of how the canvas pixels, local palette and global palette are all linked, see the section on Translating pixel values from the canvas.
The JAM file format is the format in which the graphics used in GP2 are stored in your filesystem.

Canvas
The canvas is the rectangular area of pixels stored in the JAM file.

Texture
A texture is a single rectangular area of the canvas. Textures can be mapped onto objects and ribbons in your track files (no doubt you're familiar with the term 'texture-mapping'). Each JAM file contains one or more textures.

Global Palette
GP2 uses a single, predefined palette for displaying all of its graphics (once you're actually in the game proper). I refer to this palette as the 'global palette'. All custom bitmaps to be imported into JAM files must use this palette.

Local Palette
Each texture has four local palettes. The pixel values from the canvas refer to entries in these palettes, which in turn refer to colours in the global palette, thus giving the pixels their colours. The local palettes allow the texture to be drawn differently at different distances from the camera.

Decoded Texture
A decoded texture is formed of the pixels of a texture after they have been assigned a colour using one of the texture's local palettes.

Hazing
Hazing is the blurring effect produced by the successive local palettes, similar in appearance to median filtering, which simplifies a texture as it is drawn further from the camera.
Copyright © 2001 John Verheijen. All Rights Reserved