The Dark Mod Forums
Springheel

Converting model .tgas to .dds

Recommended Posts

I really don't think it's a matter of the card, it's the 1.4GHz processor. The card is a GF6800 - older, but no slouch by any means, and even higher than a few other users' cards. But when the difference in launch time (D3 to main menu only, or DR with no map) between my 1.4GHz CPU and my 3.0GHz CPU is on the order of 3-4x longer at least, that's the key factor. The card is probably sitting there waiting to be handed the next set of textures. And once they're all cached, and as long as they don't go beyond some magic number in size, it runs without issue. The loading is slow, the swapping is slow.

 

Uhm, but the load time would be IO bound, not CPU bound. Even a 1GHz CPU would be twiddling its thumbs loading a few hundred megs from HD. My CPU scales down to 1GHz and uses maybe 10% when loading stuff in D3.

 

Sneaksie, could you please post the system specs I asked about in a new thread? I'd like to see why your system is so slow.


"The reasonable man adapts himself to the world; the unreasonable one persists in trying to adapt the world to himself. Therefore, all progress depends on the unreasonable man." -- George Bernard Shaw (1856 - 1950)

 

"Remember: If the game lets you do it, it's not cheating." -- Xarax

Or at least, erring on the side of making everything DXT1 and then finding the broken ones which need fixing to DXT3.

 

Are we sure DXT1 works? I thought we went through a similar discussion back when we converted the brush textures, and decided on DXT3. I don't remember the reason but I presume it was a good one.


Because it's old. :) My OS hard drive (where the swapfile is) is from ~2000. Whatever those old specs were for speed, that's what it is. Whatever the RAM from around that time is, that's what it is. All that, with a GF6800.

 

I'm not sure why you're thinking it should be so fast. It's approaching a decade old. A 1.4GHz CPU. It takes probably 15 seconds for Photoshop just to launch. Around a minute for D3 to launch to menu (I'm not on it right now to check). Nothing to do with map textures (okay very little). A map the size of bonehoard or blackheart or saintlucia takes near 10 mins to launch. Before it was fixed in DR, simply inverting selection on a map (another action which has nothing to do with texture memory) as big as bonehoard took several minutes of face-peeling agony. I can barely stream today's videos on the net, because the CPU demand is too high for the compressions used. It's the CPU, man. The CPU! Can't get blood out of a stone. :laugh:

I thought we'd already established that we can't compress the normalmaps, or else they won't work with addnormals passes?

 

I wasn't sure if that had been confirmed absolutely. I'm wondering if there are any examples where they used an addnormal pass in D3 where we could test it? Just odd to me that they would do things that way, and some of the normal maps are pretty huge.

 

The thing is, when we're setting image_useNormalCompression 2, the engine is creating the rxgb compressed normals at load time, and everything works fine when the engine makes them. I'm wondering what could be going wrong when we make them?

 

I think we've been setting useNormalCompression erroneously; the only setting we should need is image_useAllFormats, and then the game will see both .dds and .tga.
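For reference, a sketch of what that cvar setup might look like in an autoexec.cfg. The first two names are the ones discussed in this thread; image_usePrecompressedTextures and all the values are my assumptions, so verify against `listCvars image_` in the console:

```
// sketch only -- values are assumptions, check "listCvars image_" for the real defaults
seta image_useAllFormats "1"             // let the engine pick up both .dds and .tga
seta image_usePrecompressedTextures "1"  // prefer shipped .dds over runtime compression
seta image_useNormalCompression "0"      // don't force RXGB recompression at load time
```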

 

I'll look into some things.

Are we sure DXT1 works? I thought we went through a similar discussion back when we converted the brush textures, and decided on DXT3. I don't remember the reason but I presume it was a good one.

It had to be about alphas... every texture I've ever submitted (unless it had alpha... can't think of one right now) uses the same command line with the nvidia tools:

 

nvdxt.exe -file filename.tga -dxt1c
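For batch runs, something along these lines would build the per-file command. This is just a sketch: using `-dxt3` for alpha textures is my reading of the discussion above, and which textures actually carry alpha is left to a hand-maintained list rather than real channel detection:

```python
from pathlib import Path

def nvdxt_command(tga_path, has_alpha=False):
    # DXT1c (opaque, 4 bpp) by default; DXT3 (explicit alpha) otherwise.
    fmt = "-dxt3" if has_alpha else "-dxt1c"
    return ["nvdxt.exe", "-file", str(tga_path), fmt]

def batch_commands(tga_paths, alpha_names):
    # alpha_names: hand-maintained set of filenames known to carry an alpha channel
    return [nvdxt_command(p, Path(p).name in alpha_names) for p in tga_paths]
```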


It'd be crazy to do them all as dxt3. Most of the textures don't have an alpha channel, so you'd be doubling the size for nothing.
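The arithmetic behind the "doubling": DXT1 packs each 4x4 pixel block into 8 bytes (4 bits per pixel), while DXT3/DXT5 use 16 bytes per block to carry the alpha data. A quick sketch of the size calculation:

```python
def dxt_size(width, height, fmt="DXT1", mipmaps=True):
    """Compressed size in bytes. DXT1 stores 8 bytes per 4x4 block;
    DXT3/DXT5 store 16 -- hence the doubling for opaque textures."""
    block_bytes = 8 if fmt == "DXT1" else 16
    size = 0
    w, h = width, height
    while True:
        # each dimension is rounded up to whole 4x4 blocks
        size += max(1, (w + 3) // 4) * max(1, (h + 3) // 4) * block_bytes
        if not mipmaps or (w == 1 and h == 1):
            break
        w, h = max(1, w // 2), max(1, h // 2)
    return size
```

So a 1024x1024 opaque texture is 512 KB as DXT1 versus 1 MB as DXT3 (before mipmaps), compared with 4 MB as uncompressed 32-bit TGA.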

As for normal maps, if they have to be left as targas, then all the 1024's should be reduced to 512, and the 512's to 256. The difference will not be noticed.
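If we do go that route, the one thing to watch is that normal maps shouldn't be resized like ordinary colour images: the averaged vectors need renormalizing or the bumps get flattened. A rough sketch of the idea, operating on decoded unit vectors rather than raw RGB bytes (the decode/encode step is left out):

```python
import math

def halve_normal_map(pixels):
    """Downscale a normal map to half size by averaging each 2x2 block
    of unit vectors and renormalizing the result.

    pixels: H x W list of (x, y, z) tuples in [-1, 1]; H and W even.
    """
    out = []
    for y in range(0, len(pixels), 2):
        row = []
        for x in range(0, len(pixels[0]), 2):
            block = [pixels[y + dy][x + dx] for dy in (0, 1) for dx in (0, 1)]
            sx = sum(v[0] for v in block)
            sy = sum(v[1] for v in block)
            sz = sum(v[2] for v in block)
            # renormalize the averaged vector back to unit length
            length = math.sqrt(sx * sx + sy * sy + sz * sz) or 1.0
            row.append((sx / length, sy / length, sz / length))
        out.append(row)
    return out
```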


Civilisation will not attain perfection until the last stone, from the last church, falls on the last priest.

- Émile Zola

 

character models site

Because it's old. :) My OS hard drive (where the swapfile is) is from ~2000. Whatever those old specs were for speed, that's what it is. Whatever the RAM from around that time is, that's what it is. All that, with a GF6800.

 

I'm not sure why you're thinking it should be so fast. It's approaching a decade old. A 1.4GHz CPU. It takes probably 15 seconds for Photoshop just to launch. Around a minute for D3 to launch to menu (I'm not on it right now to check). Nothing to do with map textures (okay very little). A map the size of bonehoard or blackheart or saintlucia takes near 10 mins to launch. Before it was fixed in DR, simply inverting selection on a map (another action which has nothing to do with texture memory) as big as bonehoard took several minutes of face-peeling agony. I can barely stream today's videos on the net, because the CPU demand is too high for the compressions used. It's the CPU, man. The CPU! Can't get blood out of a stone. :laugh:

 

The point is that my CPU runs at 1GHz most of the time (downscaled from 2.6GHz) and is barely taxed (e.g. system load is 10%). So I wonder why my modern CPU effectively uses 0.1GHz while yours uses 1GHz, and yet my system is about 100 times faster.

 

There is something very very wrong with your system and it is not just age. :)

 

And I'd like to find out what the real bottleneck is. Either that, or we really can't expect anyone with a system like yours to play TDM, in which case I wonder why we take your system as the reference :)

 

Yes, we should optimize things. But if we optimize them so that they are _playable_ on your system, it will look absolutely crap on any modern system (e.g. anything that is not yet 5 years old). And I think that sacrifice is too much. :D

 

Edit: Can you please post the output of dxinfo somewhere? or email it to me at nospam-abuse@bloodgate.com. Thanx!


"The reasonable man adapts himself to the world; the unreasonable one persists in trying to adapt the world to himself. Therefore, all progress depends on the unreasonable man." -- George Bernard Shaw (1856 - 1950)

 

"Remember: If the game lets you do it, it's not cheating." -- Xarax

Edit: Can you please post the output of dxinfo somewhere? or email it to me at nospam-abuse@bloodgate.com. Thanx!

 

 

Are you talking about DXDIAG?


I always assumed I'd taste like boot leather.

 

I wasn't sure if that had been confirmed absolutely. I'm wondering if there are any examples where they used an addnormal pass in D3 where we could test it? Just odd to me that they would do things that way, and some of the normal maps are pretty huge.

 

Doom3 has an entire subfolder structure under dds called "addnormals", with both model and texture subfolders. The files inside seem to be .dds files though.

 

Other normalmaps (presumably ones not used with addnormals) are included in the regular .dds folder structure.

 

I wonder if the ones in the addnormals folders have been saved with a special kind of compression that allows them to work? Atm I'm getting error messages whenever I try to open .dds files, so I can't check.

Are you talking about DXDIAG?

 

Uh might be, I don't have windows. Whatever that little utility is called that collects all the hardware/software info and compiles it into a text report.

 

Especially important seem to me to be:

 

* RAM size (excessive swapping would make the system slower by a factor of 10 and could be one cause)

* VRAM size (again, excessive swapping to VRAM makes texture access slower)

* HD specs (esp. if swapping occurs, the HD will slow everything down)

 

I guess we already know it is an AGB system with a 1.4Ghz CPU. But what CPU exactly? :)


"The reasonable man adapts himself to the world; the unreasonable one persists in trying to adapt the world to himself. Therefore, all progress depends on the unreasonable man." -- George Bernard Shaw (1856 - 1950)

 

"Remember: If the game lets you do it, it's not cheating." -- Xarax

As for normal maps, if they have to be left as targas, then all the 1024's should be reduced to 512, and the 512's to 256. The difference will not be noticed.

While that's a little scarier, it's probably true, and would save a ton of storage space. Most of the textures aren't crisp and clear anyway.

 

So I wonder why my modern CPU effectively uses 0.1GHz while yours uses 1GHz, and yet my system is about 100 times faster.

Is that Windows? [Edit: nope, I see from above. It is a known fact that Windows is bloated and runs far slower than, say, Linux]. What service pack? I only run SP1 (with reason). My DX is 9.0c and nvidia is 93.71 (this was for a compatibility reason I can't recall at the moment). Your video card and driver revisions? What about the basic architecture, the fact that a 2.8GHz proc is not just 2X faster than a 1.4 but actually much faster because of the core design, die size, or whatever-the-hell? There are also bus speeds and RAM speeds and a bunch of other crap I know very little about, nothing more than the fact that they're 8 years old now(!).

I get good enough play performance*, but when you try to cram 360Mb of textures into 128Mb of memory, something's got to go somewhere, so it goes to my slow-ass memory and my slower-ass HD (it is NOT modern-day speed, as should be known by now from previous posts). My system has almost nothing custom installed; in fact, if anything was "very very wrong" it'd be that it's got old stuff, as opposed to any screwed-up config from new incompatibilities. Others have spoken in other threads (NH, Fidcal, IIRC) where it came about that we get similar performance, and in fact I participated in a thread where I was being asked how I get such great performance with lesser specs! I run clean. Not sure why I'm being put under a microscope.

 

*Also, if something is "very very wrong", it was since day one, because old games still run the same as they have for 8 years. In fact, they're a bit faster (stuff like RBR or Morrowind) because they can now store all the textures they need between scenes, courtesy of the GF6800. In other words, there's been no catastrophic change in behavior. Doom3 and Resurrection of Evil played through fine (though RoE pretty much sucked), just with long load times. Those games were professionally optimized and balanced to allow systems like mine to qualify. TDM is not (yet, or well). My point has only been that if we want to include the full family of P4 processors, it needs to be.

 

Time of this report: 5/24/2008, 18:00:35
  Operating System: Windows XP Professional (5.1, Build 2600) Service Pack 1 (2600.xpsp2.040919-1003)
	   Language: English (Regional Setting: English)
System Manufacturer: Dell Computer Corporation
   System Model: Dimension 8100			   
		   BIOS: Phoenix ROM BIOS PLUS Version 1.10 A06
	  Processor: Intel® Pentium® 4 CPU 1400MHz, ~1.4GHz
		 Memory: 1024MB RAM
	  Page File: 172MB used, 2288MB available

(Note:  it's currently set to allow windows to manage the swap, and it never goes above 1.5G;  I can load fairly large maps (bonehoard a while ago) with it all within RAM, I think (before it starts paging to drive))

	Windows Dir: C:\WINDOWS
DirectX Version: DirectX 9.0c (4.09.0000.0904)
DX Setup Parameters: Not found
 DxDiag Version: 5.03.0001.0904 32bit Unicode

---
 DirectX Files Tab: No problems found.
  Display Tab 1: No problems found.
	Sound Tab 1: No problems found.
	  Music Tab: No problems found.
	  Input Tab: No problems found.
	Network Tab: No problems found.
---
	Card name: NVIDIA GeForce 6800
 Manufacturer: NVIDIA
	Chip type: GeForce 6800
	 DAC type: Integrated RAMDAC
   Device Key: Enum\PCI\VEN_10DE&DEV_0041&SUBSYS_A3433842&REV_A1
  Display Memory: 128.0 MB
 Current Mode: 1024 x 768 (32 bit) (76Hz)
	  Monitor: NEC MultiSync XE21
 Monitor Max Res: 1280,1024
  Driver Name: nv4_disp.dll
  Driver Version: 6.14.0010.9371 (English)
  DDI Version: 9 (or higher)
Driver Attributes: Final Retail
Driver Date/Size: 10/22/2006 12:22:00, 4527488 bytes
  WHQL Logo'd: Yes
 WHQL Date Stamp: n/a
		  VDD: n/a
	 Mini VDD: nv4_mini.sys
Mini VDD Date: 10/22/2006 12:22:00, 3994624 bytes
Device Identifier: {D7B71E3E-4301-11CF-697F-498300C2CB35}
	Vendor ID: 0x10DE
	Device ID: 0x0041
	SubSys ID: 0xA3433842
  Revision ID: 0x00A1
  Video Accel: ModeMPEG2_A ModeMPEG2_B ModeMPEG2_C ModeMPEG2_D 
Deinterlace Caps: {212DC724-3235-44A4-BD29-E1652BBCC71C}: Format(In/Out)=(YUY2,YUY2) Frames(Prev/Fwd/Back)=(0,0,0) Caps=VideoProcess_YUV2RGB VideoProcess_StretchX VideoProcess_StretchY DeinterlaceTech_PixelAdaptive 
			   {335AA36E-7884-43A4-9C91-7F87FAF3E37E}: Format(In/Out)=(YUY2,YUY2) Frames(Prev/Fwd/Back)=(0,0,0) Caps=VideoProcess_YUV2RGB VideoProcess_StretchX VideoProcess_StretchY DeinterlaceTech_BOBVerticalStretch 
			   {212DC724-3235-44A4-BD29-E1652BBCC71C}: Format(In/Out)=(UYVY,YUY2) Frames(Prev/Fwd/Back)=(0,0,0) Caps=VideoProcess_YUV2RGB VideoProcess_StretchX VideoProcess_StretchY DeinterlaceTech_PixelAdaptive 
			   {335AA36E-7884-43A4-9C91-7F87FAF3E37E}: Format(In/Out)=(UYVY,YUY2) Frames(Prev/Fwd/Back)=(0,0,0) Caps=VideoProcess_YUV2RGB VideoProcess_StretchX VideoProcess_StretchY DeinterlaceTech_BOBVerticalStretch 
			   {212DC724-3235-44A4-BD29-E1652BBCC71C}: Format(In/Out)=(YV12,0x3231564e) Frames(Prev/Fwd/Back)=(0,0,0) Caps=VideoProcess_YUV2RGB VideoProcess_StretchX VideoProcess_StretchY DeinterlaceTech_PixelAdaptive 
			   {335AA36E-7884-43A4-9C91-7F87FAF3E37E}: Format(In/Out)=(YV12,0x3231564e) Frames(Prev/Fwd/Back)=(0,0,0) Caps=VideoProcess_YUV2RGB VideoProcess_StretchX VideoProcess_StretchY DeinterlaceTech_BOBVerticalStretch 
			   {212DC724-3235-44A4-BD29-E1652BBCC71C}: Format(In/Out)=(NV12,0x3231564e) Frames(Prev/Fwd/Back)=(0,0,0) Caps=VideoProcess_YUV2RGB VideoProcess_StretchX VideoProcess_StretchY DeinterlaceTech_PixelAdaptive 
			   {335AA36E-7884-43A4-9C91-7F87FAF3E37E}: Format(In/Out)=(NV12,0x3231564e) Frames(Prev/Fwd/Back)=(0,0,0) Caps=VideoProcess_YUV2RGB VideoProcess_StretchX VideoProcess_StretchY DeinterlaceTech_BOBVerticalStretch 
	 Registry: OK
 DDraw Status: Enabled
   D3D Status: Enabled
   AGP Status: Enabled

Predicted conclusion: "update your drivers" (which never does anything except introduce new incompatibilities, hence my current versions). I did go up to the 98.whatever-it-was's, but they caused problems and I had to roll back. No performance increase was noted (not that that would come into play here, which is apparently about massive textures loading from the pagefile).

 

Edit: Sorry to any reading this, I didn't know how to put the above into a scrolling box.

 

Edit: Also (HD not fragmented; image attached)

 

Re-Edit: I do now. :P

post-58-1211670797_thumb.jpg

Uh might be, I don't have windows. Whatever that little utility is called that collects all the hardware/software info and compiles it into a text report.

 

Especially important seem to me to be:

 

* RAM size (excessive swapping would make the system slower by a factor of 10 and could be one cause)

* VRAM size (again, excessive swapping to VRAM makes texture access slower)

* HD specs (esp. if swapping occurs, the HD will slow everything down)

 

I guess we already know it is an AGB system with a 1.4Ghz CPU. But what CPU exactly? :)

 

Yes it's DXDIAG, click on "Start->Run..." then type in "dxdiag" (without quotes) and press 'Enter'.

When it's done searching for drivers, there will be a button on the main page that says "Save All Information", and that will save the info to a text document. You can figure out what to do from there.

 

BTW it's AGP (Accelerated Graphics Port) not AGB (never heard of that acronym before)

 

EDIT: Nevermind, guess you already knew how to use it, and lol 98 series? I'm using 175 series.

Here is the download page: http://www.nvidia.com/object/winxp_175.16_whql.html



 


One thing that will really impact performance when reading from disk is hard-disk buffer size (in MB) and spindle speed. Newer drives can get away with running at 5400 instead of 7200 rpm because their areal density is so much higher (bits are packed closer together, so the disk needs to spin less to read the same information), which compensates for the slower spindle speed. But yours is old, so I'm thinking 4200 rpm with a 2 MB buffer (around there), which means really slow read/write speeds even though the processor isn't that bad. Also, even though you have an older motherboard, I'm sure you are running ATA (I doubt the bus is ATA100, probably slower), so this could also affect speeds.

 

A good upgrade is a new system. Even a $200 base system from, say, Dell would be at least 3 times faster.

For example, this is on eBay right now for ~$250: http://cgi.ebay.com/New-Dell-530S-PC-Deskt...1QQcmdZViewItem



 


Agreed, a good 512 normal is just as good as a 1024 if the source image is high quality... although if this addnormals folder that Springheel is talking about can eliminate the previous issue, it might allow us to keep the 1024 normals, although even those are a bit beefy at 1 meg.

What we will probably do is move the .tga versions to the highres repository.

Quoted for emphasis. This is a rather important step. :)

 

Unless of course the textures already exist in darkmod_hires, in which case we'd have to assume that they're already better than the ones in the main repository.

 

Does the hires repository still have the same folder structure as the main texture repository? I imagine the correspondence could have been broken quite easily if people were moving textures around, particularly during reorgs. This is something to be careful of, since we could easily end up with duplicates in hires if it's all completely automated.


My games | Public Service Announcement: TDM is not set in the Thief universe. The city in which it takes place is not the City from Thief. The player character is not called Garrett. Any person who contradicts these facts will be subjected to disapproving stares.


I personally don't want to start shrinking our normalmaps. We don't have many that are 1024 anyway, and those are usually for fairly large objects, like characters or large furniture, where I think a size change would be noticeable. If we can use the same compression that they did for D3, however, all the better.

 

We may also want to convert our menu images to .dds files, as many of them are quite large, and that would probably save on load times.


You Will NOT notice smaller normal maps. The size of them is insignificant compared to the actual texture.



You Will NOT notice smaller normal maps. The size of them is insignificant compared to the actual texture.

 

Do you mean that shrinking the normal maps will not much reduce the size on disk, or not much reduce their visible quality in game?

 

I don't have stats yet, but I think normal maps are much smaller than the diffusemaps we have.

 

@Springheel: Agree about the menu TGAs to DDS.

 

I will resume working on the scripts in a few minutes; spent most of the day so far fighting my Fritzbox, just to find out the USB port is probably broken :(


"The reasonable man adapts himself to the world; the unreasonable one persists in trying to adapt the world to himself. Therefore, all progress depends on the unreasonable man." -- George Bernard Shaw (1856 - 1950)

 

"Remember: If the game lets you do it, it's not cheating." -- Xarax

Is that Windows? [Edit: nope, I see from above. It is a known fact that Windows is bloated and runs far slower than say Linux]. What service pack? I only run SP1 (with reason). My DX is 9.0c and nvidia is 93.71 (this was for a compatibility reason I can't recall at the moment). Your video card and driver revisions? What about the basic architecture, the fact that a 2.8GHz proc is not just 2X faster than a 1.4 but actually much faster because of the core design, die size, or whatever-the-hell? There are also bus speeds and ram speeds and a bunch of other crap I know very little about, nothing more than the fact that they're 8 years old now(!).

I get good enough play performance*, but when you try to cram 360Mb of textures into 128Mb of memory, something's got to go somewhere, so it goes to my slow-ass memory and my slower-ass HD (it is NOT modern-day speed, as should be known by now from previous posts). My system has almost nothing custom installed; in fact, if anything was "very very wrong" it'd be that it's got old stuff, as opposed to any screwed-up config from new incompatibilities. Others have spoken in other threads (NH, Fidcal, IIRC) where it came about that we get similar performance, and in fact I participated in a thread where I was being asked how I get such great performance with lesser specs! I run clean. Not sure why I'm being put under a microscope.

 

Chill :) I didn't want to "single you out", nor make you a problem. It's just that I saw/remember you posting most about performance problems - and the times you reported are awful!

 

If there is a 10 second stutter somewhere, the map is basically unplayable (in my definition).

 

Now, the reason I am asking you and talking to you is that we had quite a bit of negative feedback about performance for TD. Back then, we said "It is the AI and the rain".

 

Now, for the next release, we have much better AI and no rain, and so we should have good performance. If we don't, we will get negative feedback again and it would be much better to sort that out internally before the masses flame us :D

 

E.g. we can:

 

* optimize stuff

* set a baseline (you need at least THIS to play this mission)

 

For both things, we need some timings/benchmarks/tests done on different systems, and also to find out what the bottlenecks actually are, so in case someone has the same problem we can:

 

* say "please upgrade, your hardware is simple too old"

* say "update this, or that, tweak this setting"

 

But in both cases we simply need to know more than a vague idea of "it's old, thus it must be slow" :)

 

*Also, if something is "very very wrong", it was since day one, because old games still run the same as they have for 8 years. In fact, they're a bit faster (stuff like RBR or Morrowind) because they can now store all the textures they need between scenes, courtesy of the GF6800. In other words, there's been no catastrophic change in behavior. Doom3 and Resurrection of Evil played through fine (though RoE pretty much sucked), just with long load times. Those games were professionally optimized and balanced to allow systems like mine to qualify. TDM is not (yet, or well). My point has only been that if we want to include the full family of P4 processors, it needs to be.

 

The problem we have with the current situation is that there are only vague ideas about how slow it actually should be (or is).

 

It really doesn't matter how fast other games run, because:

 

* game A uses X mbyte of memory, you have X+Y, so more, so it runs at full speed

* game B uses Z mbyte of memory, but you have only Z-U, so less, so it swaps itself to death

 

Now add that "game B" can be a TDM mission which on Tuesday uses 300 MB of memory, and on Wednesday suddenly 350 (or 250), and you see that you cannot really compare these things.

 

Maybe we should have some sort of benchmark maps that don't get changed themselves. Like one that uses tons of geometry, another one tons of moveables, another one with tons of textures.

 

Then we can measure loadtimes etc.

 

Anyway, thanx for posting the info, will look into it later.

 

(PS: Why not SP2?)


"The reasonable man adapts himself to the world; the unreasonable one persists in trying to adapt the world to himself. Therefore, all progress depends on the unreasonable man." -- George Bernard Shaw (1856 - 1950)

 

"Remember: If the game lets you do it, it's not cheating." -- Xarax


It'll not reduce their visible quality. Games like RtCW used 256 and 128 textures and it still looks damn good. Normal maps don't suffer from size reduction as much as the detail in colour maps does. The 1024's were never intended as the final game resolution; they were just in case they had to be edited, etc.

Taking a quick look through the textures, there's a lot of 1024 normal maps that could even be knocked down to 256, considering the low frequency information they're carrying.


Civillisation will not attain perfection until the last stone, from the last church, falls on the last priest.

- Emil Zola

 

character models site

It'll not reduce their visible quality. Games like RtCW used 256 and 128 textures and it still looks damn good. Normal maps don't suffer from size reduction as much as the detail in colour maps. The 1024's were never intended as final game resolution, it was just in case they had to be edited etc

Taking a quick look through the textures, there's a lot of 1024 normal maps that could even be knocked down to 256, considering the low frequency information they're carrying.

 

I disagree a bit - low-res normalmaps certainly are visible when you get close to textures with sharp edges - you can see the features get smeared and softened.

 

So before we start dropping the visual quality, let's first start converting TGAs to DDS - i.e. get the low-hanging fruit :)


"The reasonable man adapts himself to the world; the unreasonable one persists in trying to adapt the world to himself. Therefore, all progress depends on the unreasonable man." -- George Bernard Shaw (1856 - 1950)

 

"Remember: If the game lets you do it, it's not cheating." -- Xarax


Well this is the point, the normal map size needs to be decided on a case by case basis, rather than being batch reduced to a certain resolution.



So before we start dropping the visual quality, let's first start converting TGAs to DDS - i.e. get the low-hanging fruit

 

Agreed.

Now, for the next release, we have much better AI and no rain, and so we should have good performance. If we don't, we will get negative feedback again and it would be much better to sort that out internally before the masses flame us

Yep. Well, the only real performance issues I have at this time with saintlucia are:

1. memory reqs higher than 1Gb

2. something is wrong at the 'bend in the road', a portal flood, or some other unknown; it's not just about geometry

 

I did further testing last night and was able to determine/confirm some things.

 

Memory: my system loads the map quick and easy -- until the RAM is filled (1Gb). At that point, it starts to page to the hard drive, as expected. This of course is thousands of times slower than memory access.

 

Drive speed: I temporarily(?) switched my page file from the slower, older OS drive to the less slow, less old data drive. This sped things up a bit; it still froze on cache swaps of course, but instead of 10 seconds, it may have been 3-4 seconds, as an example.

 

So in my case, the combination of not enough RAM and a slow swap drive is the killer. Not the video card, because frankly this video card destroys my 3GHz machine in performance (it has that lousy onboard intel card).

 

For the mod in general: know that for a map of this size (it's moderate, I'd say; this is not a 'large' Thief-type map), 1Gb of RAM is below specification. That's a significant statement. I'd wager 1Gb is fairly normal. Few have less, but not many have made the jump to 2Gb yet, especially among more modest (non-Crytek-ready) systems.

 

What can we do about it? That's where this texture discussion comes in. ;)

 

For me locally, I'm currently trying to determine the exact cvars to set so that I keep my current resolution but bump my texture sizes down a notch (if anyone knows of such, help a guy out). For the mod, it might not be a bad idea to better support lower-res texturing. That is, if we want fans with 1Gb RAM to have any chance of enjoying anything but small-to-moderate maps.
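(For anyone else hunting: I believe D3 has a family of image_downSize cvars for exactly this. I'm going from memory, so the names and values below are unverified assumptions; `listCvars image_` in the console will show the real ones.)

```
// sketch -- names/values from memory, verify with "listCvars image_"
seta image_downSize "1"               // downsize diffuse maps above the limit
seta image_downSizeLimit "256"
seta image_downSizeBump "1"           // same for normal maps
seta image_downSizeBumpLimit "256"
seta image_downSizeSpecular "1"       // and specular maps
seta image_downSizeSpecularLimit "64"
```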

 

PS: Why not SP2?

To be honest, I'm not sure, but I remember it was something significant, possibly having to do with work.

 

 

Edit: Quite a few goodies in this one which I'm going to try out.

