The Dark Mod Forums


Posts posted by rich_is_bored

  1. I'm not talking about how the texture is mapped. I'm talking about how the texture is filtered. To clarify...

     

     

    Here we have Quake 2 running in software mode. No texture filtering takes place. The pixels in each texture are easily identifiable. Now jump ahead to 3m59s. Here it's running in hardware mode. The textures are blurry. The color values are not simply sampled from the nearest pixel within the texture. They are interpolated.
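To make the difference concrete, here's a toy sketch (plain Python, not engine code, with a made-up 2x2 greyscale texture) of nearest-neighbour sampling versus bilinear interpolation:

```python
import math

# Toy sketch of the two sampling modes on a 2x2 greyscale texture.
# u and v are texture coordinates in [0, 1].
def sample_nearest(tex, u, v):
    # Software-mode look: the colour comes straight from one texel.
    h, w = len(tex), len(tex[0])
    x = min(int(u * w), w - 1)
    y = min(int(v * h), h - 1)
    return tex[y][x]

def sample_bilinear(tex, u, v):
    # Hardware-mode look: blend the four surrounding texels, which
    # is why filtered textures appear blurry up close.
    h, w = len(tex), len(tex[0])
    x, y = u * w - 0.5, v * h - 0.5
    x0, y0 = math.floor(x), math.floor(y)
    fx, fy = x - x0, y - y0
    def texel(ix, iy):  # clamp to the texture's edges
        return tex[max(0, min(iy, h - 1))][max(0, min(ix, w - 1))]
    top = texel(x0, y0) * (1 - fx) + texel(x0 + 1, y0) * fx
    bot = texel(x0, y0 + 1) * (1 - fx) + texel(x0 + 1, y0 + 1) * fx
    return top * (1 - fy) + bot * fy

tex = [[0.0, 1.0],
       [1.0, 0.0]]
print(sample_nearest(tex, 0.5, 0.25))   # 1.0: one texel, unfiltered
print(sample_bilinear(tex, 0.5, 0.25))  # 0.5: an interpolated blend
```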

  2. I get that being able to save and reload whenever you want negates the challenge of living with your mistakes but it's also the path of least resistance. The player can't reliably fight or outrun AI and there are a load of penalties associated with being spotted. Reloading a save is a more effective means of recovering than anything else in the player's arsenal.

     

    There's room for experimentation sure but I'm confident that the better solution here is to find ways to encourage players to save less rather than force them.

  3. I'm going to toss this idea out there knowing this crowd will probably shoot it down but here goes.

     

    This engine was at one time capable of recording demos, although that functionality may be broken by this point. For those who aren't familiar, a demo is a recording in the same vein as a movie, except it captures the game state over the course of a playthrough and takes up much less space. If you had a large sample size of these demos for every FM, you'd have a wealth of information to draw from for all sorts of purposes.

     

    As a map author you'd be able to see how people respond to the challenges you present them. As a player you'd be able to gauge what missions are to your fancy. As a coder you'd be able to see exactly what someone did before the game crashed.

     

    To expand upon the voting bit and finding missions you would enjoy playing: imagine the player can mark missions locally simply as "good" or "bad", and the game is capable of identifying missions whose demo files fall within a given threshold of similarity. A list of recommended missions would be trivial to produce, and there are no global ratings to game or scores to fuss or argue about.
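As a sketch of how trivial that recommender could be, assuming demo files have already been boiled down to per-mission stat vectors (the mission names, features, and distance threshold below are all made up for illustration):

```python
import math

# Hypothetical local recommender. Each vector stands in for stats
# extracted from a mission's demo files (e.g. play time, times
# spotted); Euclidean distance and the threshold are assumptions.
def distance(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def recommend(demo_stats, liked, threshold):
    # Recommend any unrated mission whose demo stats fall within
    # the threshold of at least one mission marked "good".
    recs = []
    for name, stats in demo_stats.items():
        if name in liked:
            continue
        if any(distance(stats, demo_stats[l]) <= threshold for l in liked):
            recs.append(name)
    return recs

demo_stats = {
    "mission_a": (0.9, 0.1),   # long, stealthy playthroughs
    "mission_b": (0.8, 0.2),
    "mission_c": (0.1, 0.9),   # short, combat-heavy playthroughs
}
print(recommend(demo_stats, liked={"mission_a"}, threshold=0.3))
# mission_b resembles the liked mission_a; mission_c does not
```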

  4. I don't believe it's possible to automate optimization. Somewhere in the pipeline you need a knowledgeable person to clean the object up. Does the finished object have good topology? Is its polygon count reasonable? Would it benefit from a shadow mesh? How are its UVs packed? Is the texture resolution reasonable? And if the object is especially large, there are cases where splitting it into pieces can be beneficial.

     

    This could be useful in some rare cases but only for those with modeling experience. Without it you have beautiful models and horrible performance.

  5. I'm hotlinking an image here so it may not show up but...

     

    http://read.pudn.com/downloads113/sourcecode/windows/other/471455/Normal%20mapping/test_normal_map__.jpg

     

    Assuming these are supposed to protrude from the wall, everything checks out. Imagine there is a green light coming from below, and a red light coming from the right.

     

    In most cases, it's the green channel that is incorrect. Fixing it is as simple as inverting it. Alternatively you can flip your image vertically before generating the normal map and then flipping it again after.
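For what it's worth, the fix is a one-liner per pixel. A toy sketch on raw 8-bit RGB tuples (with a real normal map you'd apply the same per-pixel inversion in your image editor or an image library):

```python
# Toy sketch of the green-channel fix on 8-bit RGB tuples.
def flip_green(pixels):
    # Inverting G converts between the two normal map conventions
    # (green pointing up vs. green pointing down).
    return [[(r, 255 - g, b) for (r, g, b) in row] for row in pixels]

normal_map = [[(128, 200, 255), (128, 55, 255)]]
print(flip_green(normal_map))  # [[(128, 55, 255), (128, 200, 255)]]
```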

  6. Nonsense. Developers don't contribute to open source projects for the hell of it. They stand to gain something. Maybe they've tried mapping and don't like the tools. Maybe they want to play more new missions and see a better toolset as a means to an end. Maybe they're simply perfecting their craft or find it entertaining.

     

    If someone showed up tomorrow with the code to completely remedy DR's shifting planes problem, we'd all celebrate. If this same person had paid to have that code produced, why would it be a problem? We aren't all coders. If money can produce bug fixes and improvements it would be foolish to turn down these contributions.

     

    It's only a matter of time before such a scenario plays out. For example ...

     

    https://www.bountysource.com/trackers/1296875-darkradiant

  7. There was a part towards the end of your introductory video where you assumed you were safe and ran right into a guard only to cut the video. I'm left wondering if we missed out on what might have been one of the best moments in your playthrough. Next time let it play out. Try working around the problem. More "misadventures", less video editing.

     

    Edit: Scratch that. You did much less of that in the videos that follow.

  8. I wrote an ASE importer/exporter for Blender at one point. The most frustrating part of the process is having to play catch-up nearly every time there is a new Blender release. I knew updates would likely break the plugin, but I wasn't prepared for how often. Looking back on that situation, I think if you want a workflow pipeline with some real longevity, you'd need to export to a format that's natively supported. It's much easier to write a converter that takes, say, an OBJ and makes an ASE than to keep addressing issues with an app that's in constant flux.

     

    No discredit to Blender mind you. It's a fantastic program and it's wonderful that you don't have to wait years between releases. But if you can identify a natively supported animation format, it might be best to write an MD5 converter for it. Do the work once and be done with it.
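To illustrate why the converter route is the easy half, here's a toy parser for the OBJ side of such a converter (geometry only; real OBJ files carry more record types, and a full OBJ-to-ASE tool would then write these lists out as the corresponding *MESH_VERTEX / *MESH_FACE records):

```python
# Toy OBJ geometry parser: vertices and faces only.
def parse_obj(text):
    verts, faces = [], []
    for line in text.splitlines():
        parts = line.split()
        if not parts:
            continue
        if parts[0] == "v":
            verts.append(tuple(float(n) for n in parts[1:4]))
        elif parts[0] == "f":
            # OBJ indices are 1-based and may carry /uv/normal refs
            faces.append(tuple(int(p.split("/")[0]) - 1
                               for p in parts[1:]))
    return verts, faces

obj = "v 0 0 0\nv 1 0 0\nv 0 1 0\nf 1 2 3\n"
verts, faces = parse_obj(obj)
print(verts, faces)  # three vertices, one zero-indexed triangle
```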

  9. I've been tinkering with Landmark a fair amount lately. There isn't much of a game implemented ATM but it's fun to experiment with the building tools.

     

    The world is composed of both voxels and props. Voxel density is about 3 times greater than Minecraft's, so the player is about 6 voxels tall. Also, voxel corners are not locked to the grid. You can push and pull them, albeit indirectly, so it's possible to make virtually any shape provided you work at the appropriate scale.

     

    If you don't want to pony up for admission into the beta you can still get a good sense of how things work via Blender. It turns out Blender's remesh modifier uses the same technique to "smooth voxels".

  10. I saw a post on a forum somewhere, maybe it was Polycount? At any rate, the idea was to exploit mipmaps by manually editing a DDS so that the downsized versions of a texture were not only smaller, but completely different images. The result is that what you see varies depending on distance, and that sounds like exactly what you want here. The bonus is there is no overhead.

     

    The only problem is I'm not very familiar with DDS. Do they have an alpha channel? If so, this should be applicable and fairly easy to set up. You'd make a black DDS image and alter the alpha channel of each downsized version so that the transparency decreases along with the resolution. Slap that texture on a patch. Stick it in front of your visportal. Then tweak the alpha values until the range is correct.
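A sketch of the alpha math, assuming alpha means opacity and a simple linear falloff (which you'd tune by hand as described; the endpoint values are placeholders):

```python
import math

# Sketch of the fade trick: a black texture whose mip chain carries
# increasing alpha, so the patch is transparent up close and opaque
# black at a distance, hiding what's behind the visportal.
def mip_alphas(base_size, near_alpha=0, far_alpha=255):
    levels = int(math.log2(base_size)) + 1  # e.g. 256 -> 9 mip levels
    return [round(near_alpha + (far_alpha - near_alpha) * i / (levels - 1))
            for i in range(levels)]

print(mip_alphas(256))  # mip 0 fully transparent, smallest mip opaque
```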

  11. It would be better if all the sounds were packaged in a PK4 and the archive were given a name that ensures it's loaded last. The engine loads PK4s in alphanumerical order.
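That's the same ordering a plain sort gives you, which is why a prefix like the (made-up) one below reliably sorts, and therefore loads, last:

```python
# Alphanumeric load order, as the engine does it: the archive named
# to sort last is loaded last, so its files override the rest.
# These pack names are examples only.
pk4s = ["tdm_sound_sfx01.pk4", "tdm_sound_vocals01.pk4",
        "zz_my_sound_overrides.pk4"]
print(sorted(pk4s)[-1])  # zz_my_sound_overrides.pk4 loads last
```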
