The Dark Mod Forums

Important Gaming News


Maximius


Heard about them before, looks kinda spiffy, too bad the writer can't spell or background check worth beans. "Gabe Novell" indeed.

 

 

One question though: once the D3 source has been released years from now, will we be able to plug support for such cards into the engine?

http://www.thirdfilms.com

A Thief's Path trailer is now on Youtube!


The article said that the PPU may be an add-on card or even an external device that could be added to an existing system, provided the proper level of computing power is available, but that production of the devices will probably wait until the software exists to utilize them. Which indicates to me that such plug-in support for the D3 engine may not be an option. But I know next to nothing of these matters.


One question though: once the D3 source has been released years from now, will we be able to plug support for such cards into the engine?

 

The article says the PPU is based on the Novalogic physics engine API. Novalogic has been ported to D3, which would mean that these cards should work with D3. At least this would be my assumption.

Gerhard


The article says the PPU is based on the Novalogic physics engine API. Novalogic has been ported to D3, which would mean that these cards should work with D3. At least this would be my assumption.

 

It's the NovodeX physics SDK/API. Currently no other physics engine is supported; they could be, but Havok et al. would have to pay for this. Havok is working on dual-core tech rather than a PPU.

 

The reason for this is price. It is uncertain at this point how many gamers are going to pay $300 for a PPU.

 

 

Lots of info here

http://personal.inet.fi/atk/kjh2348fs/ageia_physx.html

Eval/Demo Board Info

 

- Manufactured by TSMC (130nm process)

- 125 million transistors

- 182 mm² die size

- 28 watts total power consumption

- 128MB GDDR3

- PCI only (PCIe cards expected further in the future)

- ASUS first board manufacturer

- May be integrated in graphics cards in the future

- Only 1 model at launch

- The PCI and PCIe cards will be separate

- Samples in Q3 2005

- Expected to become available in Q4 2005 (December)

- Price roughly $249 to $299

- NovodeX physics SDK/API (multi-threaded, PhysX native)

- No use yet for anything other than gaming (may add in future)

- The cards are bundled with an Unreal Engine 3 tech demo (up to the board provider)

 

The biggest problem these cards face, after price, is that no graphics card at present can draw 50,000 physics objects. Push 10,000 particles and it impacts framerate.


IMO in the long run it will get cheaper and GFX cards will become faster. With dual cards you can already increase scene complexity, which takes time to process, and if the PPU takes some of that load off the CPU it will definitely help. And I bet there will be people who buy this thing. If software actually supports it, I guess I would buy one myself. :)

Gerhard


I agree, but I would be surprised if these take off. The cost is just too high at a time when cheaper consoles are being released. Yes, it costs no more than a midrange GPU, but a GPU is needed for games and a PPU isn't. If its launch doesn't go well, other makers aren't going to take it up and the PPU could die a death.

Cut $100 off it and it would sell like hotcakes, but as it is I would be surprised.

 

It launches at the end of the year, in the same timeframe as the X360. $300 for an X360, or $300 for a PPU?

 

The company is fabless, so if the first run isn't a success they won't die, but I can't see that many people spending that much on one of these when the game will still run fine without it, just with fewer physics objects.

 

Edit: still, I would like one :)

 

Some new vids:

http://www.airtightgames.com/currentproject.html


LOL! This is cool!! Did you guys see this? Check this out:

 

A lot of little interactive physics demo pieces. It was linked from one of those PPU sites:

http://www.novodex.com/rocket/NovodexRocket_V1_1.exe - NovodexRocket_V1_1.exe - Interactive physics demo from September 2004, 15.6MB

 

I'm watching a horse run around in a circle right now, crashing through crates. When I hit the 'O' button it does a ragdoll slam into them! This is hilarious!!!


I agree, but I would be surprised if these take off. The cost is just too high at a time when cheaper consoles are being released. Yes, it costs no more than a midrange GPU, but a GPU is needed for games and a PPU isn't. If its launch doesn't go well, other makers aren't going to take it up and the PPU could die a death.

 

I think it will take off, because PCs now have to compete with consoles that have three very powerful dual-threaded CPUs plus a GPU, and they will be running physics beyond anything current PCs can do with just one processor and a GPU. I would expect the price to come down dramatically once it takes off, though, and I wouldn't be surprised if nVidia, ATI and Intel get in on the act and make their own PPUs; I'm sure Microsoft will bring out Direct Physics for the next iteration of DirectX. It will also be interesting to see how long it is before the PowerPC chips in the Xbox and the PS3 make their way into PCs.

 

PCs need PPUs to catch up to consoles again, and to play games ported from the new consoles, so bring 'em on :)


Personally I'd rather see more dual CPUs in PCs than a dedicated PPU. I guess as someone who sometimes codes physical simulations, I'm kind of wary of embedding one engine in the hardware and having to live with it for the hardware's lifetime, no matter how physics engines change in the future (maybe you can update it a little with firmware upgrades, but it's still mainly a chip designed to run one engine).

 

The finite differences method is great and all, but who knows, someone could come up with some awesome software that lets a physics engine do things we've never dreamed of before. I'd rather have dual or more CPUs that could incorporate the new engine than a chip designed around a single engine.
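
For what it's worth, by "finite differences" I just mean stepping the equations of motion in small time increments. A minimal sketch in C++, with all constants and names made up for illustration (this is not any real engine's API):

```cpp
#include <cstdio>

// Illustrative only: advancing a mass-spring-damper with a
// finite-difference (semi-implicit Euler) step, the basic loop
// at the heart of most rigid-body physics engines.
int main() {
    double x = 1.0;  // displacement (m)
    double v = 0.0;  // velocity (m/s)
    const double m = 2.0, k = 50.0, c = 0.5;  // mass, stiffness, damping
    const double dt = 1.0 / 60.0;             // one 60 Hz frame

    for (int frame = 0; frame <= 240; ++frame) {
        double a = (-k * x - c * v) / m;  // F = -kx - cv, then a = F/m
        v += a * dt;                      // finite-difference update of v...
        x += v * dt;                      // ...then of x (semi-implicit Euler)
        if (frame % 60 == 0)
            std::printf("t=%.1fs  x=%.3f\n", frame * dt, x);
    }
}
```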


Yeah, I think you are right - even GPUs are getting so generalised and programmable that you might as well just have computers with four or so multicore CPUs rather than bother with separate hardware for everything. That would certainly be more flexible than a separate PCI card, and probably cheaper, too.

 

EDIT: I think the way to go is to take a very modular approach, so you can plug in different physics engines (that could use different hardware) that all use a common API - rather like the way different makes and models of graphics cards use OpenGL or DirectX. The modular approach has some drawbacks of its own, I guess, but it allows for more flexibility in terms of system requirements.

 

So hopefully a standard API - let's call it Direct Physics or OpenPL for the sake of the discussion - can be implemented. You could make this work in a game by having a standardised set of physical properties to model (mass, friction, gravity etc.), and this info is passed to the physics API, which interfaces with whatever compliant hardware is available, whether a separate CPU or a PPU.
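
To make the idea concrete, here is a rough sketch of what such a layer could look like - every name in it ("OpenPL" included) is invented for the sake of this discussion, not a real API:

```cpp
#include <memory>
#include <cstdio>

// Hypothetical "OpenPL"-style layer: the game talks to one interface,
// and each backend (CPU, PPU, ...) implements it. All names invented.
struct RigidBodyDesc { double mass, friction; };

class IPhysicsBackend {
public:
    virtual ~IPhysicsBackend() = default;
    virtual void addBody(const RigidBodyDesc& d) = 0;
    virtual void simulate(double dt) = 0;
};

class CpuBackend : public IPhysicsBackend {
    int bodies_ = 0;
public:
    void addBody(const RigidBodyDesc&) override { ++bodies_; }
    void simulate(double dt) override {
        std::printf("CPU backend: stepping %d bodies by %.4fs\n", bodies_, dt);
    }
};

// A PPU backend would implement the same interface and hand the
// work to hardware; the game code above this line never changes.
std::unique_ptr<IPhysicsBackend> createBackend() {
    // Imagine probing for a PPU here and falling back to the CPU.
    return std::make_unique<CpuBackend>();
}

int main() {
    auto physics = createBackend();
    physics->addBody({10.0, 0.4});  // mass, friction
    physics->simulate(1.0 / 60.0);
}
```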

 

Does anyone think I'm barking up the wrong tree here? Or is it better to just have a set of generic parallel CPUs, let programmers implement things entirely their own way, and dispense with standardised APIs?


Standardised APIs are the way to go. Just look at other industries: if you had to go to one particular shop just to get the correct screw, you would be, well, screwed. :) As it is, because of standardisation, you can go to any shop, tell them which one you need, and know it will work. The same should go for software. The problem, though, is that the hardware is not fully utilised, because standardised APIs cost some performance.

Gerhard


I hear you, baby!!

 

The following news story was on Yahoo just last year, in October. It's probably old hat to most of you.

 

HERZLIYA, Israel (Reuters) - An Israeli start-up has developed a processor that uses optics instead of silicon, enabling it to compute at the speed of light, the company said.....

 

...The processor performs 8 trillion operations per second, equivalent to a super-computer and 1,000 times faster than standard processors, with 256 lasers performing computations at light speed....

 

...The company's prototype is fairly large and bulky but when Lenslet begins to supply the processor in a few months it will be shrunk to 15 x 15 cm with a height of 1.7 cm, roughly the size of a Palm Pilot.

 

"In five years we plan to shrink it to a single chip," project manager Asaf Schlezinger said...

 

..."It's conceivable this technology could become mainstream inside chips in 10 years time," Tully said.

 

 

I have always believed, and forever shall, that this is the next step in CPUs. Early on, the main problem with this tech was the size (obviously), but more important (and they don't really go into this in the article) was the quality of light, believe it or not.

 

Hurray for photons and polaritons!!!!

 

Hylix.

 

Edit: Small clarification.


I imagine an optics-based computer would be horrendously expensive for the reason you mention - quality of light. My brother is a physicist who works with quantum optics, and while I haven't got a clue when it comes to that sort of thing, he has told me how expensive the crystals needed for high quality lasers are (he is always having trouble convincing his superiors that he needs a new $30,000 crystal for some new experimental laser) - 256 high quality lasers suitable for an optical computer would make for a PC that costs more than a couple of houses until they can get some kind of mass production rolling along. Maybe you don't need lasers quite that good, but I don't think there will be affordable optical or quantum desktop computers anytime soon... give it ten years maybe. Would be nice to have though :)

 

I think PPUs and GPUs are the best choice for the current state of technology, but as technology grows, it will be better to just have one really powerful CPU.

 

I think single-CPU systems have reached their zenith - the industry is moving towards parallelism, with multicore CPUs and multi-CPU systems becoming more mainstream. We have nearly reached the physical limits of what you can do with a single CPU chip - we are already seeing diminishing returns of processor power versus clock speed. A Pentium 4 running at 4 GHz is not twice as powerful as a P4 running at 2 GHz; I am not sure what the actual figure is, but I think it equates to about 30% more performance. And I read somewhere that if you theoretically overclocked a 3.4 GHz P4 to 4.5 GHz, it would only translate to a 2% increase in performance - hardly worth the effort considering the cooling problems it would cause (dunno how true that is though). And a 4 GHz processor consumes a ridiculous amount of power and pumps out a colossal amount of heat. It is better to have a bunch of slower CPUs working in parallel, as you get a lot more computing power for your buck that way. And that seems to be what CPU manufacturers are doing - all are now making dual-core CPUs with less emphasis on clock speed...
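
A toy model of why clock scaling pays off so poorly (all numbers made up for illustration): memory latency stays fixed in nanoseconds, so a faster core just spends more of its cycles waiting on RAM:

```cpp
#include <cstdio>
#include <initializer_list>

// Toy model with made-up numbers: DRAM latency doesn't shrink when the
// core clock rises, so extra GHz mostly buys extra waiting.
int main() {
    const double instructions = 1e9;
    const double base_cpi = 1.0;          // cycles per instruction when fed
    const double miss_rate = 0.005;       // cache misses per instruction
    const double mem_latency_ns = 100.0;  // fixed DRAM latency

    for (double ghz : {2.0, 4.0}) {
        double stall_cycles = miss_rate * mem_latency_ns * ghz;  // per instr.
        double seconds = instructions * (base_cpi + stall_cycles) / (ghz * 1e9);
        std::printf("%.0f GHz: %.2f s\n", ghz, seconds);
    }
    // Prints 1.00 s vs 0.75 s: doubling the clock gains ~33% here, not 2x.
}
```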


But even with light-based computers, the speed (in terms of clock cycles) will always be limited by the size of the structures that make up the computer. You can only make logic gates so small, whether optical or electronic or otherwise, and the speed of the processor is directly related to the size of the components. When you have reached the practical limits of raw clock speed, the only other option is to use parallelism to increase performance. Having a processor dedicated to graphics, one dedicated to physics, another to sound, maybe another to AI, and another tying it all together will run a much more detailed game than one processor trying to do it all, and it will be easier to create programs because you can approach them in a much more modular, compartmentalised way.
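
As a minimal sketch of that split - plain C++ threads standing in for dedicated chips, with all the subsystem names invented:

```cpp
#include <thread>
#include <cstdio>

// Sketch of the modular split described above: each subsystem gets its
// own worker, the way dedicated chips (GPU/PPU/APU) would each take a
// load off the main CPU. std::thread stands in for real hardware here.
void physics() { std::printf("physics: stepping rigid bodies\n"); }
void audio()   { std::printf("audio: mixing 3D sound\n"); }
void ai()      { std::printf("ai: updating agents\n"); }

int main() {
    std::thread t1(physics), t2(audio), t3(ai);
    t1.join(); t2.join(); t3.join();  // "tie it all together" each frame
    std::printf("main: rendering frame\n");
}
```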

 

Interestingly, the new Creative X-Fi sound chip has more transistors than an AMD 64 FX and a P4 combined (though not clocked as fast), so it won't be long before very powerful APUs (audio processing units) hit the market, taking a big load off the CPU in processing complex 3D raytraced sound and the like.

 

Having a single processor do it all is not going to be the best option for most people - why spend big bucks on one mofo of a CPU when you can get a bunch of lesser chips for the same price that will do what you need in parallel, and faster at that? (Unless space is an issue.)

 

One man digging a hole will get it dug a lot faster if he has a few friends to help, as he is limited by how fast he can dig (though it obviously depends on the size of the hole - too many people working on something small and simple is not efficient at all). Regardless of improvements in computing technology, that won't change the fundamental issue of parallelism vs serialism: when it comes to crunching big numbers, serial data processing will always be limited by the laws of physics and engineering technology, while parallel computing is only limited by how many processors you can get working on the problem. You can build a render farm out of old 486 CPUs that will be considerably faster at rendering than a single AMD 64 or 3.6 GHz P4 doing the same thing, and much cheaper, since people practically throw old 486s away.
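
Amdahl's law puts a number on the hole-digging point: the speedup from N workers is capped by whatever fraction of the job stays serial. Illustrative figures only - rendering's serial fraction is close to zero, which is why render farms work so well:

```cpp
#include <cstdio>
#include <initializer_list>

// Worked example of the digging-a-hole point: Amdahl's law.
// Speedup(N) = 1 / (s + (1 - s) / N), where s is the serial fraction.
int main() {
    const double serial = 0.05;  // assume 5% of the job can't be parallelised
    for (int n : {1, 4, 16, 64, 256}) {
        double speedup = 1.0 / (serial + (1.0 - serial) / n);
        std::printf("%3d workers: %.1fx speedup\n", n, speedup);
    }
    // The curve flattens near 1/serial = 20x, however many 486s you add.
}
```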

 

Of course, Quantum computers could blow all of that away - one day we should be able to have enough processing power to run a holodeck out of a computer the size of a coffee cup.


Some random comments:

 

Overall, I think the first place we'll see optics in computers is in optical interconnects between separate chips. This is a fairly simple setup with an array of lasers/detectors aligned (it could even be in free space, but you probably need some packaging to keep dust and the like out of the beams), and people have been working on it for a while. For example, parallel processors could be placed right above/below each other and connected with an array of lasers and detectors integrated on each processor.

 

256 high quality lasers suitable for an optical computer would make for a PC that costs more than a couple of houses until they can get some kind of mass production rolling along.

 

It is true that the material you need for good optics (i.e., low defect density so you can lase at low currents) is relatively expensive right now. The materials used in integrated optics (GaAs and InP) are not as mature as Si, so people are still working out inexpensive ways to grow them with low defect density.

 

The other key, though, is integration. You shouldn't need 256 crystals for 256 lasers in your CPU. As in silicon, you can put 256 lasers on a single GaAs or InP wafer. Integrated optics is still nowhere near the component density of integrated electronics, but that's changing as people develop more compact components that can do their job over a very small length scale.

 

Making smaller components, you come up against things like loss from bending light at sharp angles, and the fact that each finite pulse of light has some distribution around the central wavelength (due to the time-frequency "uncertainty relationship", although really it's just a Fourier transform pair relationship), so even TIR mirrors will not reflect all of the signal.

 

 

But even with light-based computers, the speed (in terms of clock cycles) will always be limited by the size of the structures that make up the computer. You can only make logic gates so small, whether optical or electronic or otherwise, and the speed of the processor is directly related to the size of the components.

 

With optics, you can send multiple wavelengths down the same channel (WDM), so you can send a lot more data in parallel through one circuit. That is one major advantage in terms of more processing power.
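
The arithmetic is simple - capacity scales with the number of wavelengths. The figures below are assumptions for illustration, not real device specs:

```cpp
#include <cstdio>

// Back-of-the-envelope WDM arithmetic: each wavelength is an independent
// channel in the same waveguide, so capacity scales with channel count.
int main() {
    const int channels = 40;              // wavelengths multiplexed together
    const double per_channel_gbps = 10.0; // modulation rate per wavelength
    std::printf("aggregate: %.0f Gb/s through one waveguide\n",
                channels * per_channel_gbps);
}
```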

 

When people talk about all-optical computing and optical logic gates, I think one of the main limitations, aside from component size, is power consumption. It's a lot faster, but switching light with light (using nonlinear optics) still requires a lot of power in the switching beam, compared to the tiny current it takes to switch a transistor. All-optical computing is still pretty far in the future.

 

Oh well, just some random thoughts, now I have to stop procrastinating and get back to actually working on integrated optics :)

