Poser – the program (9 Workflow)

Workflow considerations

Everyone involved in 3D is aware of the simple relationship between image size and quality on the one hand, and system resource (RAM, CPU) usage and rendering time on the other: they all go up, or down, together. This calls for a stepwise refining workflow: gradually work upwards, while getting the maximum benefit from the fast modes.

Of course everyone has his or her own way of working, but it never hurts to give this look-before-you-leap approach a second thought.

Step 1 Rough Design

A picture tells a thousand words, so it won’t harm me to write down the goal I want to accomplish in fewer than a hundred.

Then I set the image size or at least its proportions (aspect ratio), create the (Dolly) camera which is going to take the shots, build the scenery including props, and put the figures where I want them. I do so in low resolution, using blocks and other primitives instead of the real details, copies of Andy instead of the fully dressed Vickys, and so on. Now I should be able to get the camera settings right, and to establish the lighting setup. Almost all details of cameras and lights can be established in this phase already.

Figures, props and scene elements can be assigned the right basic surface properties, especially color, as well as location and body pose. No details, which is why Andy has no expressions and no fingers, but does have the full rigging (bone structure) of a hi-res character like Vicky. Note: Andy2 (Poser 9 / Poser Pro 2012) does have fingers too.

To find out the basic impression of the image (does it communicate the message?), I set and switch the display style of the whole Document, of each figure individually or even of specific figure elements. Each of those can be set in the Display menu:

All those can be switched from the Document Display Style panel:

And I can choose what is going to be affected as well:


The ones I use most are:

  • Outline (Ctrl+2), literally drawing the line of the objects’ whereabouts; works as a Document style
  • Silhouette (Ctrl+1), to check whether poses communicate; works on Figures but hardly on the whole scene
  • Cartoon with Line (Ctrl+7), just a more advanced and somewhat shadowed silhouette
  • Smooth Shaded (Ctrl+8), shows the base surface color without textures and so gives a quick color impression

The default Texture Shaded (Ctrl+9) actually becomes relevant after texturing, and usually after bringing in the real (and textured) scene elements, figures and clothes.

This “phase one” approach becomes especially relevant when building animations. Camera paths, focal changes, following spots and alternating lighting strengths, the effects of major color shifts, finding out whether objects stand in the way, but also most pose changes, body moves and collision risks: all of these can be addressed.

Next to that, this “phase one” scene or animation is a very good starting point to test the overall workflow: post-processing and Photoshop interfacing, movie frame production and video editing, syncing animation with sound and music, deploying Pose2Lux and LuxRender for high-end results, optional Background, Queue and Network rendering, and establishing the proper project folder setup. Whatever I’m going to accomplish, I get an end-to-end test early on, on the easiest ‘this took me a day’ scene around, instead of on a fully loaded and finely detailed ‘took me four months’ result. Fixing things also requires far less time that way.

During this phase, the default render setting is fine: just casting shadows, no raytracing, no options checked. And if I want to test my IDL / IBL / Global Illumination / SkyDome setting: I use a very low value for Irradiance caching.

Step 2 Fine design

Well, I gradually improve the quality and detail in the scene. I replace the primitive shapes by the final scene elements, replace the Andys by Vickys or the like, give them clothes, replace base colors by textures, bump / displacement maps and whatever, start deploying dynamic hair and cloth, and tweak animations, camera settings and light sets accordingly. Just step by step.

While doing so, I keep on testing the major pieces of the remaining workflow, especially those that are apt to break due to newly added elements. When I know up to which version of the project everything was functioning, fault finding becomes much easier than on the final result without intermediate steps. I also check the settings for new materials and texture maps after replacing objects and assigning material sets. Especially when using Poser Pro 2010 and up, the Gamma settings and color swatches need attention. This is dealt with in detail in my separate Understanding Corrections tutorial.

While improving on the project details, the render quality can be improved as well. The Auto-settings go from 1 to 9 – the numbers are not visible though. Draft equals 2, Default equals 4 and Final equals 8, which is fine for all electronic publishing (eBooks, DVD, web gallery, …). The extreme setting (9) might have some use for fine print only (real-world exhibition gallery prints, art magazines), as it roughly doubles the resolution (level of detail), which makes renders take four times as long, or more.
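The four-times figure follows from simple area scaling: render effort grows roughly with the square of the linear level of detail. A tiny sketch of that rule of thumb (my own illustration, not an official Poser formula):

```python
# Rough rule of thumb (my own illustration, not an official Poser formula):
# render time grows roughly with the square of the linear level of detail,
# because the detail is sampled over an area.
def relative_render_time(detail_factor):
    """Estimated render time relative to a baseline render,
    for a given linear detail (resolution) multiplier."""
    return detail_factor ** 2

# Roughly doubling the linear level of detail quadruples render time:
print(relative_render_time(2.0))  # → 4.0
```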

At the same time, I start checking the render options: smooth polygons (increases render time somewhat), use displacement maps – in my materials as well (this really might be a RAM killer!), apply depth of field (focal blur) and 3D motion blur when appropriate (these might about double render time), and optionally apply a post filter sharpening the bump and texture details.

The manual render settings let me also:

  • Increase raytrace bounces, to handle extra reflection-in-reflection (having the Silver Surfer as a figure) or refraction-on-refraction (looking through a series of glass objects) in the scene.
  • Switch on Indirect Light, and set its quality.
  • Reduce Irradiance caching or Pixel samples, or increase Shading rate, to reduce render times while the other settings maintain quality.
  • Increase bucket size (say, doubling it), which gives a slight reduction in render time (say 5%) while requiring somewhat more RAM (5% as well); it mainly pays off for a large image and a limited number of threads, as the render is only finished when the last thread is done. Honestly, I never do it.
  • Adjust Displacement bounds, when the use of displacement maps is checked. Note that I have to fill the Displacement slots in the materials as well!

One way to achieve high-quality test results fast is to deploy area rendering: rendering just a small portion of the scene. This works for investigating texturing or lighting details which are not easily seen in the preview. Think for example of reflection and refraction, and of displacement and its shadowing.

Another practical way is to test any post-processing on low-resolution results. Layering, color correction, masking and the like can be set up for low-res renders first, and just need refinement later. This saves me the burden of handling tens of layers and settings with 7000×5000 images.

Poser – the program (X Professional)

Professional considerations

One day while visiting the Poser forum on Renderosity I ran into the following dialogue:

A: given my scene lighting (…), how can I get the proper shadows between the car/tires and the ground?
B: just paint them in, using Photoshop
A: that’s cheating!

The other day I visited one of my favorite galleries of a guy photographing fashion models. He wrote:

I do like to try to evoke an expression that is often not a young model’s natural go to look. Sometimes it works and other times it requires tweaking in Photoshop. Funny side note – there are at least 3 images in various places online where a model is getting comments referring to her amazing expression when most of the expression is the result of my application of the liquefy filter… .

Believe me, if I want to become the utmost Poser guru on Earth, the first route is the one to take: never cheat anything. But if I’m on a budget, on a deadline, or just want to make a series of interesting images during my holiday leave instead of one single perfect render in my whole life, the second route is more profitable. That is what matte background painters do. That is what the special effects guys (and girls) are for. Professionals cheat, they do so knowledgeably, and all pro software gives it the full support it deserves. Poser too offers some support, which can be extended using plugins like Advanced Render Settings.

In general, the golden rule for professionals is: never do in 3D what can be done in 2D, aka: invest in pre- and post production. For instance, in scene and object creation:

  • Background images instead of complete full depth 3D scenes
  • Billboard or simple texture mapped block objects instead of full detail 3D ones
  • Bump (at larger distance) and Displacement (shorter distance) textures instead of full detail 3D modeling

And in post

  • Separate renders for Color (Beauty pass, Diffuse), Shine (Specularity), Reflection and the like
  • Separate renders for (groups of) lights / radiosity, and of shadows / ambient occlusion
  • Separate renders for masking objects and material zones
  • No render at all but just a 3D import into Photoshop
  • And finally blending and adjusting image layers in Photoshop

The main reasons are

  • During design, it’s far more flexible. I, or any client, might develop new ideas on the spot when viewing the first concepts. This way it’s easy to create and annotate variations, and to evaluate them on the go. The result is a more robust, well-evaluated concept which has a better chance of making it to the end.
  • In 3D, I can tweak shadows and highlights forever, and when I’m deploying IDL at print-size (3500×2500 magazine or even 7000×5000 poster) image format waiting another 24 hours for my tenth ‘final’ render is not something I’m looking forward to. Photoshop is just blazingly faster and much more interactive.
  • In 3D, some things just don’t work out unless I spend hours getting it sorted. Conforming clothes which don’t fully wrinkle, Dynamic clothes with a little poke-through, small areas without a shadow, reflections being too strong or too sharp, you name it. Everything that can go wrong, will. Especially when I’m on a deadline.

Surprisingly, this holds even more for hobbyists than pros. Most amateurs have a regular day-job, kids or whatever which leaves them with just an hour a day for their project. They also lack training, and a senior staff member teaching them the tricks of the trade. And while they might not have clients, they might have mental deadlines. Images which should be ready as a birthday gift. Images which should be done so they can move on to the next idea they got recently.

Remember, a lot of pro photographs are post-processed for various reasons. Renders are just virtual photographs, so why not play a similar game? Instead of mastering Poser, one should first master character, fashion or nude photography, and post-processing techniques. The best arrangement of Material Room nodes or the best dial settings in the Cloth Room might be a lesser concern. That’s my opinion, at least.

As I said, Advanced Render Settings will be your friend, but Poser itself offers some tools too.

  • Render A with Cast Shadows unchecked, and render B with both Cast Shadows and Shadows Only checked.
    Add image B in Photoshop on top of layer A, set B in multiply mode and adjust its strength. Now I’ve got full interactive control over my shadows.
  • Render without and with Gamma Correction switched on (Poser Pro only) and blend the results.
  • Render without and with Depth of Field checked, blend the results.
  • Render with groups of lights on / off, and add the results while varying the contribution of each (can be tedious with many lights)
  • And so on. What can result from interactively blending with and without Raytracing, with and without Indirect Light, with and without Ambient Occlusion on the lights? Photoshop layers can be subtracted to get the net effect only, which in turn can be blurred, brightened, darkened, contrasted, etcetera.
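For those who prefer scripting over clicking, the multiply-mode trick from the first bullet can be sketched in a few lines of plain Python. The pixel values and the opacity mix below are my own illustration; Photoshop’s multiply mode computes base × blend / 255 per channel, which is what this mimics:

```python
# Multiply-mode blending, as Photoshop does for the shadow layer:
# each channel of the result is (base * shadow) / 255, optionally
# weakened by an opacity factor to "adjust its strength".
def multiply_blend(base, shadow, opacity=1.0):
    """Blend two equally sized lists of 0-255 channel values."""
    out = []
    for b, s in zip(base, shadow):
        blended = b * s // 255                           # straight multiply mode
        out.append(round(b + (blended - b) * opacity))   # opacity mix with the base
    return out

# A white base pixel under a mid-grey shadow pixel, at 50% layer opacity:
print(multiply_blend([255, 255, 255], [128, 128, 128], opacity=0.5))  # → [192, 192, 192]
```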

Poser – the program (Z Appendix)

Appendix Poser.ini settings

This file hosts a lot of settings. Some of them can be changed by the user, via a preferences or settings dialogue. Some settings are just internal (colors, pane and window size and position, COM ports), some are for testing purposes (all sorts of logging), and some look like legacy entries to me. This chapter is not complete yet, and might grow with Poser updates. More details are welcome.

BOUNDING_BOXES 1 Menu Display \ Tracking
DEPTH_CUE 0 Preview pane & Display menu
SHOW_LIGHTS 0 Hierarchy Editor
DRAW_OFFSCREEN 1 Render Settings \ Preview \ Display Engine
BEND_BODY_PARTS 1 Menu Display
BACKFACE_CULL 0 Render Settings \ Remove Backfacing Polys
BACKGROUND_COLOR 22527 20479 17917
FOREGROUND_COLOR 16000 16000 16000
UI_BACKGROUND_COLOR 54227 51143 47288 Content of the BG Color node
UI_BACKGROUND_IMAGE 0 File \ Import \ Background Image
QUATERNION_ON 0 Menu Animation \ Quaternion Interpolation
USE_META_UI 1 Use settings from XML file for UI definition details
LAUNCH_PREFERRED 0 General Preferences \ Document \ Launch behavior
UI_PREVIOUS 1 General Preferences \ Interface \ Launch behavior
LAST_PROJ_GUIDE “” Menu Window \ Project Guide
PROJ_GUIDE_PALETTE_SHOW 0 Menu Window \ Project Guide
DONT_SHOW_QUICK_START_DIALOG 1 Menu Window \ Quick Start (front page: Don’t show…)
PYTHON_EDITOR_PATH “” General preferences \ Misc \ Python
USE_COMPRESSION 1 General preferences \ Misc \ Save Files
HTML_WINDOW_LOC 240 120 681 642
UNIT_SCALE_FACTOR 2.621280 General preferences \ Interface \ Display – Units (@ meters)
DEFAULT_CREASE_ANGLE 80.000000 General preferences \ Document \ Smoothing
UNIT_SCALE_TYPE 5 General preferences \ Interface \ Display – Units
USE_OPENGL 1 Menu Display \ Preview Drawing + Render Settings \ Preview
OGL_ALLOW_PBUFFERS 0 Render Settings \ Preview
USE_EXTERNAL_BINARYMORPH 1 General preferences \ Misc \ Save Files
MTL_VIEW_SIMPLE_ADVANCED 1 Open material Room in Advanced Mode
CACHED_RENDERS_MAX 25 General Preferences \ Render \ Cache
CHECK_FOR_UPDATES_ON_LAUNCH 1 General Preferences \ Misc \ Software Update
FIGURE_CIRCLE 1 Menu Display \ Figure Circle
TABLET_MODE 0 General Preferences \ Interface \ Mouse Input
RECENT_FILE “…” Filled on the fly, max 10 entries
RECENT_FILE “…” … ditto
DO_UNIVERSALPOSE 1 General Preferences \ Library \ Pose Sets
HARDWARE_SHADING 0 Render Settings \ Preview \ Enable HW Shading
ENABLE_HARDWARE_SHADOWS 0 Render Settings \ preview \ Enable HW Shading
PREVIEW_TEXTURE_SIZE 512 Render settings \ Preview \ Texture Display
PREVIEW_TRANSPARENCY_LIMIT 1 Render settings \ Preview \ Transparency Display
PREVIEW_TRANSPARENCY_LIMIT_TO 90.000000 Render settings \ Preview \ Transparency Display
CACHED_COMMANDS_MAX 100 General preferences \ Document \ Undo-Redo
RENDER_IN_SEPARATE_PROCESS 1 General Preferences \ Render \ Render process
FFRENDER_PROCESS_PORT 4414 Communication Poser ↔ FFRender process
RENDER_THREADS 12 General Preferences \ Render \ Render process
FILE_SEARCH_POLICY 2 General preferences \ Library \ File Search
TEMP_PATH B:\Appdata\Temp\Poser Pro\9 General Preferences \ Misc \ Temp files
LIBRARY_IS_AIR 1 0 for Flash, 1 for Air when available
NO_LIBRARY 0 General preferences \ Library \ Launch behavior
FOREGROUND_POSER_ON_LIB_LOAD 1 General preferences \ Library \ Launch behavior
MULTITHREADED_BENDING 1 General Preferences \ Document \ Optimizations
CONTENT_INSTALL_RUNTIME D:\Content\PPro2012\Downloads\Runtime\libraries
UI_COLOR_SCHEME standard.xml
TEXTURE_DISK_CACHE_SIZE 500 General preferences \ Render \ Texture caching
CACHE_TEXTURES_IN_BACKGROUND 1 General preferences \ Render \ Texture caching
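The format itself is simple: one setting per line, an upper-case key followed by whitespace-separated values, and some keys (like RECENT_FILE) may repeat. A small sketch of how such a file could be read (my own illustration – Poser maintains this file itself, and quoted values containing spaces would need extra care):

```python
def read_poser_ini(text):
    """Parse Poser.ini-style text into {key: [value-lists]}.

    Keys may occur more than once (e.g. RECENT_FILE), so every key
    maps to a list holding the value parts of each occurrence.
    Note: this naive split breaks quoted values containing spaces.
    """
    settings = {}
    for line in text.splitlines():
        parts = line.split()
        if not parts:
            continue  # skip blank lines
        key, values = parts[0], parts[1:]
        settings.setdefault(key, []).append(values)
    return settings

# Hypothetical sample content, mimicking the entries listed above:
sample = """RENDER_THREADS 12
RECENT_FILE scene1.pz3
RECENT_FILE scene2.pz3"""
print(read_poser_ini(sample)["RENDER_THREADS"])  # → [['12']]
```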


Breaking the 2Gb Barrier? (1 Introduction)

Enough physical RAM for all your simultaneous programs, and enough user memory for one single program, really are separate things. Solving one will not solve the other.

Download the LaaTiDo service program discussed in this tutorial, as well as this tutorial in PDF format (4Mb).

Running out of Memory

When you’re running a 32-bit program (note 1), you might experience “running out of memory” events (note 2). It can happen especially when rendering 3D scenes, or when rendering media output for video or music. This can make the program crash, it can make it lose functionality, or it can cripple the results. You may overcome this problem to some extent (note 3), by adjusting the program itself (note 4).


  • 1. when you’re not sure that your program is 64-bit, then it’s most probably 32-bit
  • 2. see the Monitoring User Memory section in this tutorial
  • 3. by default, each 32-bit program is granted 2Gb User Memory. This can be increased up to 3Gb. That’s it. In case you need more, you’ve got to go for 64-bit programs and thus a 64-bit Operating System as well.
  • 4. see the Raising Large Address Awareness section in this tutorial.

Note that Poser 8, Daz Studio 3, Carrara 7 and up might have their Large Address Awareness already raised by the supplier and do not require further enhancement. Vue has not raised LAA and does need your attention.

When you’re running the program in a 32-bit Windows environment (note 5), you will have to adjust the Windows system settings as well (note 6). This comes at a price. By assigning more memory to user programs, there is less available for system routines. This may slow down some operations, like massive data transfers between disks, or over the network. When you’re running a 32-bit program in a 64-bit environment, only the program itself might need an adjustment (note 7).


  • 5. when you’re not sure that your Windows is 64-bit, then it’s most probably 32-bit. 64-bit Windows versions exist for: XP Pro, Vista and Win7 Home Premium, Professional and Ultimate
  • 6. see the Enabling Large Address Usage section in this tutorial. Note that Mac, Linux, and 64-bit Windows environments do have Large Address Usage enabled by default, so it’s a 32-bit Windows thing only.
  • 7. see the Raising Large Address Awareness section in this tutorial

The important thing is that this is all about program and system settings. Running out of memory has NOTHING to do with the amount of physical RAM in your box, so increasing or decreasing RAM might bring performance effects (note 8), but will NOT affect the issue above.



Breaking the 2Gb Barrier? (2 Monitoring)

Monitoring User Memory

With a Right-click of your mouse on the Taskbar, you can open Taskmanager.

The Processes tab will show CPU- and memory usage, amongst others.

Click the Memory column header twice to sort the hungriest program to the top.

The memory shown is the User Memory, which for 32-bit programs should not exceed 2Gb unless the measures described in this tutorial are taken.

Next to User Memory, there is something like System Memory. Taken together, they make up the total memory usage, as shown in the Performance tab of Taskmanager.

Since the Taskmanager tends to stay on top of all windows, you can see whether the occurrence of program issues coincides with exceeding the 2Gb boundary. If so, you might profit from the measures described in this tutorial. If not, there is no need to mess around with program and system settings.

Minimizing the Taskmanager leaves a CPU indicator on the Taskbar. A double-click on this icon reopens the Taskmanager. This way, you can continue working and have Taskmanager at hand when required.


Breaking the 2Gb Barrier? (3 Awareness)

Raising Large Address Awareness

Each individual program in a Windows environment (and in other environments as well) can access two kinds of memory; System Memory and User Memory.

System Memory (red in the schema) contains the program code and various settings and tables handled by Windows. In this memory area only Windows can both read and write (to load the program), while the program itself can only read. This protects systems against viruses, self-modifying code and other potentially threatening program behavior.

User Memory (blue in the schema) is the area where the program itself is allowed to read and write, to store and retrieve its intermediate values and results from user actions. When the program is up to something massive, this is the area that gets filled to the brim.

While each 32-bit program – without any exception – can deal with a maximum of 4Gb of memory in total, most programs are created in such a way that 2Gb is the maximum amount of user memory they can handle, even when they are assigned more by Windows. See the top Exe in the schema: blue and red areas equal in size.

Unless the program is made “Large Address Aware” (or: LAA), in which case that limit is lifted; see the bottom two programs in the schema, where the blue area is larger than the red one. This can be done at creation time, by the supplier, and happens more and more. It can also be done at production time, by you on your PC, and requires a special piece of software, like “LaaTiDo”.

This program makes slight changes to your software. As this involves some risk, you make copies of the originals first, of course. One issue is that each time a new version (update, service pack, …) of that software is installed, you have to deal with the new executable all over again.
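For the curious: the flag LaaTiDo toggles is a single bit (0x0020, IMAGE_FILE_LARGE_ADDRESS_AWARE) in the Characteristics field of the executable’s PE header. A minimal Python sketch that inspects it – an illustration only, not a replacement for LaaTiDo’s backup-and-enable workflow:

```python
import struct

IMAGE_FILE_LARGE_ADDRESS_AWARE = 0x0020

def is_large_address_aware(pe_bytes):
    """Return True when the PE image has the LAA bit set.

    Layout: the DOS header stores the PE header offset at 0x3C
    (e_lfanew); the COFF Characteristics field sits 22 bytes past
    the 4-byte "PE\\0\\0" signature.
    """
    if pe_bytes[:2] != b"MZ":
        raise ValueError("not an executable (missing MZ signature)")
    (pe_offset,) = struct.unpack_from("<I", pe_bytes, 0x3C)
    if pe_bytes[pe_offset:pe_offset + 4] != b"PE\x00\x00":
        raise ValueError("missing PE signature")
    (characteristics,) = struct.unpack_from("<H", pe_bytes, pe_offset + 22)
    return bool(characteristics & IMAGE_FILE_LARGE_ADDRESS_AWARE)

def check_file(path):
    # The headers sit at the start of the file; 1024 bytes is plenty.
    with open(path, "rb") as f:
        return is_large_address_aware(f.read(1024))
```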

The following programs

  • Poser 8
  • Daz Studio 3
  • Carrara 7 and 8
  • all Adobe Elements 9 and CS5

are known to be LAA, and do not require further treatment, while

  • Bryce 7 and earlier
  • all 32-bit Vue variations
  • Photoshop 7
  • Paintshop Pro X2 and earlier

are known NOT to be LAA and do require this enhancement. I’ve no knowledge yet of Poser 7 and before, Daz Studio 2, Photoshop CS4 / Elements 8 and before, and the latest PaintShop Pro X3. Anyway, for Poser, Daz Studio and Carrara, upgrading is a recommended alternative to fiddling with the program’s executable.

How to proceed?

First of all, you have to make sure you need this kind of solution. Is the problem you’re facing caused by the 2Gb User Memory boundary indeed? Read the previous Monitoring User Memory section about it.

To continue, you’ll need the LaaTiDo program. Download the LaaTiDo service program discussed in this tutorial, as well as this tutorial in PDF format (4Mb).

Second, you’ve got to locate all the relevant executables in your Program Files folder, and copy (not move!) them to a place to work on them, like a new folder in your Temp directory.

This is because you (and Windows) don’t want LaaTiDo or any other application to write in your Program Files directly.

Third, start LaaTiDo and open one of those executables for inspection (1), then click the [Check] button (2). Is it Large Address Aware (LAA)?

If so, you’re done; just test the other executables. You may find that when a supplier made the program LAA, all relevant executables are so already. When the supplier did not, none of them are, and you have to make them so.

Fourth, for non-LAA executables (1), you [Check] (2), then [Backup] (3) and then click the [Enable] button (4). That’s it, and you’re almost done. Repeating the previous Check (5) will show you that the new executable is indeed LAA now.

Finally, you move the adjusted executables AND their respective unchanged copies (!) back to the Program Files environment, overwriting the existing ones. This is a manual action, and Windows will ask you for confirmation.

Now you can test your adjusted program, under simple conditions. Does it seem to work? Then you’re fine. Does it fall apart instantly? Then you have to step back, and restore your backed-up unaltered copies over the altered ones, as the LAA change is not working for them. That means the end of the road indeed: stepping up to all-64-bit software is the only way to go then.

When it does seem to work, and you’re in a 64-bit Windows environment, or a 32-bit environment which was set up before, then you’re done indeed. If not, then all you’ve got to do is adjust your 32-bit Windows as well. The next section tells you how.


Breaking the 2Gb Barrier? (4 Enabling)

Enabling Large Address Usage

While each 64-bit operating system is enabled to support Large Address by definition, and various 32-bit operating systems (like most Unix/Linux variants) are enabled by default too, Windows is not.

Not being enabled implies that the halfway split of 2Gb System Memory and 2Gb User Memory still holds, even for LAA (Large Address Aware) programs. Non-LAA programs will fall over when requesting more than the 2Gb maximum, and LAA programs will fall over as the non-enabled 32-bit Windows will not fulfill their request. You won’t notice the difference. So besides the program being made LAA, you’ll need to enable Large Address Usage in Windows.

This is how, in Vista and up (Win7, …).

Go to Accessories, and open the Command Prompt with Admin rights.

Then you type: bcdedit

and {Enter}. This presents just a readout of your Windows settings. Note the blue arrow, there is no additional info below OptIn.

Then you type: bcdedit /set increaseuserva 2900

and {Enter}.

You can check the result by typing bcdedit {Enter}; this presents just a readout, and you’ll find the variable increaseuserva (NB: Increase User Virtual Address) with the 2900 value. You did the readout first to check whether something was set already.

2900 means 2900MB, or almost 3Gb. The value 3072 (MB, or 3Gb exactly) is the maximum to use, but you can go lower as well. The value chosen is your new User Memory limit; Microsoft advises against values below 2800. This is why I pick the middle route. Higher values leave less memory for system use, and might affect your system’s performance in bulky operations. Note that non-LAA programs will not be affected by any of these adjustments to your Windows environment.

You can use the same bcdedit /set increaseuserva … command for changing values, or use bcdedit /deletevalue increaseuserva to return to the original, unmodified situation. In that case even LAA programs will obey the 2Gb limit again.

The Command Prompt can be quit with the exit {Enter} command, and the new Windows setting will be effective after a restart (!) of your PC.

This is how, in Windows NT (up till XP)

Win7 and Vista offer a startup regime which is an enhancement over the original Windows NT one. Here is the original approach; enabling Large Address Usage is the same for Win NT and above, like Win 2000 and Win XP. The images and code examples below were made on my XP system.

Essentially, you have to extend the Windows startup command by adding the /userva=2900 /3gb switches at the end. The /userva switch is optional; when you leave it out, the value defaults to 3072. The /3gb switch is mandatory, and it should be at the end.

You may add a second startup command line, which enables you to select whether or not to run in the extended addressing mode. This gives you an escape route as well: when issues occur, you just restart Windows using the first, original command line.

How to do it:

  • Rightclick My Computer on your desktop, and select Properties.
  • Choose the Advanced tab, and click the third button, about (re)start options
  • In the options window, click the Edit button
  • This opens the text editor on the startup file (boot.ini), which might read something like

[boot loader]
timeout=10
default=multi(0)disk(0)rdisk(0)partition(2)\WINDOWS
[operating systems]
multi(0)disk(0)rdisk(0)partition(2)\WINDOWS="Microsoft Windows XP Professional" /fastdetect /NoExecute=OptIn

  • Now just add a copy of the last line, change the name between the quotes, and add the extra options. So now you have:

[boot loader]
timeout=10
default=multi(0)disk(0)rdisk(0)partition(2)\WINDOWS
[operating systems]
multi(0)disk(0)rdisk(0)partition(2)\WINDOWS="Microsoft Windows XP Professional" /fastdetect /NoExecute=OptIn
multi(0)disk(0)rdisk(0)partition(2)\WINDOWS="With 3Gb memory limit" /fastdetect /NoExecute=OptIn /userva=2900 /3gb

  • Save and close the text file
  • Then you might choose this new, second option as the startup default, in the options window. Then click OK. (NB: you might have to click OK first and reopen the options window again, to make it read the edited boot.ini file and discover the second command line.)
  • Click OK to close the Properties window too. Note that you have to restart Windows (!) and pick the correct variant to have the 3Gb feature activated.

At startup, you will now see two systems to pick from, for the time set in the timeout line in the text (10 seconds in the example). When issues occur, you can restart Windows and pick the first option. As an alternative, you can delete the first startup option, which will make the startup go without offering choices. That’s up to you; in that case, when issues occur, you have to re-edit the startup file before restarting, deleting the /userva and /3gb switches.

Again, non-LAA programs will not be affected by any of these adjustments to your Windows environment. Changing either the startup command line or the boot.ini file back to its original state returns you to the original, unmodified situation. In that case even LAA programs will obey the 2Gb limit again.


Breaking the 2Gb Barrier? (5 Memory)

Physical and Virtual Memory

In the previous sections, it was discussed how individual programs could use 2, 3 or 4Gb memory. And when you open the Taskmanager, you can see 50 or more programs running at the same time.

So, how much RAM can one have installed to make all things work?

The answer is that 2Gb is a minimum requirement, 4Gb is an absolute maximum for 32-bit Windows anyway (it’s just a non-LAA program itself!), and the 3Gb as in most laptops gives a good price/performance ratio – unless you’re doing the bulky things that made you read this tutorial in the first place.

So, how does it work? I won’t go into details, but every time a program needs memory, it gets a chunk (or: page) of it assigned by the Windows Memory Manager (WMM). The WMM can move filled but hardly used pages to disk, and back. Effectively, it uses disk space to fill in enhanced requirements, and this is why you don’t need to have all the RAM aboard physically.

The downside of this approach is that when your program actively needs a lot of RAM, the WMM will get very busy swapping all those memory pages on and off the disk. You will see and hear the rattling of your disks, and you will experience serious performance degradation. More physical RAM simply means less swapping, and better performance (as long as you have this kind of issue).

For all addressing, programs talk to the WMM. So whether they request up to 2Gb or even up to 3Gb, it might come from either RAM or disk. Hence, adding more physical RAM will not solve any problems coming from crossing the 2Gb User Memory border. On the other hand, when so much memory is actively used, it’s likely that you might run into swapping delays as well – although in my experience those RAM-hungry programs are quite handy in avoiding it.

So, enough physical RAM for all your simultaneous programs, and enough user memory for one single program, really are separate things. Solving one will not solve the other.