The Dark Mod Forums

Intel's New CPU


HappyCheeze


Nothing like getting the latest and greatest hardware, when suddenly, something new comes out.

 

Today Intel released their new Core i7 processors. They're quad cores.

 

http://www.tigerdirect.com/applications/ca....asp?CatId=4072

 

For starters, the i7 motherboards replace the FSB with Intel's own QuickPath Interconnect.

 

wiki: http://en.wikipedia.org/wiki/Intel_Core_3

|=-=------=-=|

happycheeze.deviantart.com

 

Moddb

 

Gamers Outreach, a nonprofit that uses video games to raise money for charity.

|=-=------=-=|


I haven't followed this type of technology news for years (O, RLY? I'm playing TDM on a 1.4GHz proc), so I'm always left with the same wonderings:

 

* How much faster are these duo/quad/whatever cores? Charts which demonstrate?

* Are they just more processors stacked on the chip? And if so, does that really help much?

* Are we in fact then at some theoretical or practical limit, and not going to exceed 3.x GHz anytime soon?

* What the hell happened to the recent (maybe 6 months or so?) announcement (by Intel IIRC) where they said a "new breakthrough" allowed for processors thousands of times faster than today's?


I guess you are talking about the newly discovered basic electronic component called the "memristor". So far, processors mostly consist of transistors, which are basically just bidirectional, voltage-controlled switches. The memristor also functions like that, with the addition that it can somehow remember past voltage potentials (I don't know exactly how it works). This is said to be a great advance in technology because, using this component, a large amount of storage capacity could be put directly on the processor, reducing RAM-access latencies and so on... This is the amateur explanation of what they are planning to do, and I don't have any further information either.

 

The practical limit you are talking about is also pretty much reached with current technology. The frequency of the processor is directly linked to the distance the electric signals have to travel, so we have to make our chips smaller and smaller. This is where we'll hit the limit: some fabrication processes use light to build the structures, and as soon as the structures are roughly on the same scale as the wavelength of that light, diffraction prevents accurate patterning.
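
To put a rough number on that diffraction argument: the usual back-of-envelope estimate is the Rayleigh criterion, smallest feature ~ k1 x wavelength / NA, where k1 is a process factor and NA is the numerical aperture of the optics. A quick sketch (the k1 and NA values below are just illustrative assumptions, not figures from this thread):

#include <cstdio>

// Back-of-envelope Rayleigh estimate of the smallest printable feature:
// feature ~= k1 * wavelength / NA. The k1 and NA values used below are
// illustrative assumptions only.
static double minFeatureNm(double wavelengthNm, double k1, double na) {
    return k1 * wavelengthNm / na;
}

int main() {
    // 193 nm ArF immersion lithography, aggressive k1 ~ 0.3, NA ~ 1.35
    std::printf("193 nm immersion: ~%.0f nm features\n", minFeatureNm(193.0, 0.3, 1.35));
    // 13.5 nm extreme UV, k1 ~ 0.5, NA ~ 0.33
    std::printf("13.5 nm EUV:      ~%.0f nm features\n", minFeatureNm(13.5, 0.5, 0.33));
    return 0;
}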

 

So if that memristor thing doesn't work out, I guess it's up to the programmers to make better use of the potential of multicore systems to make our computers faster...


I've been running a dual-core Quad for about a year now, at 2.4 GHz.

 

I came off of a 900 MHz machine, so yeah, the 2.4 GHz alone makes it faster. But I think the dual-core quad does a lot too.

 

Just one test I did when I first got it was starting as many programs at once as I could, all from one-click quick launch...

 

It used to take a few minutes to open Photoshop. Now I can open Photoshop, Max, iTunes, IE, Inkscape, DarkRadiant, Doom 3... at the same time (as fast as I can click) and they are all open in about 30 seconds. They all get split onto different cores, but how often do you do that?

Of course it's got hyperthreading so newer programs that use it can run faster.

 

But I bought this knowing new tech would come out and I'd be happy with this set-up for at least 5 years, possibly longer.

Dark is the sway that mows like a harvest


The practical limit you are talking about is also pretty much reached with current technology. The frequency of the processor is directly linked to the distance the electric signals have to travel, so we have to make our chips smaller and smaller. This is where we'll hit the limit: some fabrication processes use light to build the structures, and as soon as the structures are roughly on the same scale as the wavelength of that light, diffraction prevents accurate patterning.

 

Not exactly. The lithography limit may exist way out in the future, but with current technology, Moore's Law is still on track in terms of increasing component density*. However, the actual computing power (operations per second) is not increasing at the same rate as component density anymore. We're starting to be limited by the speed of accessing RAM, the fact that many applications don't benefit from parallel processors, heat dissipation, and other issues with moving data around from RAM to CPUs or from core to core. A new buzzword is "balanced computing," where you try to balance the speed of accessing RAM with the speed of the processor (in other words, RAM transfer has to get a lot faster to catch up).

 

One limit we'll definitely hit in the near future is the speed of electronic interconnects. The current interconnect technology of copper "wires" on the chip is estimated to top out at about 15-20 Gbps**. Optical interconnects (moving data around as light instead of electrons) may overcome this limit.

 

* http://en.wikipedia.org/wiki/Image:Transis..._Law_-_2008.svg

** http://www.deviceforge.com/articles/AT3588366215.html


I can confirm that the main difference at this time is the ability to multi-task many complex programs. And I'm not sure which OSes do this apart from Vista 64. It helps me hugely to run both Dark Mod and Dark Radiant together windowed - though you need a decent graphics card too - and this was the main reason I upgraded.

 

This lack of advantage for individual programs is mainly because not much software makes use of multiple cores, so basically each program is just using one.

 

Mine is an AMD quad core at 2.61 GHz. I can see the performance of each core separately in Task Manager. I just got a program (not tried yet) which claims it lets you choose which core to use for which programs. Not sure whether Windows normally manages this, or the processor.


After reading the last couple of comments, it's pretty clear my understanding of CPUs is pretty much dwarfed here :P But this website is generally how I compare all the hardware I look at:

www.tomshardware.com

It has charts to compare everything: hard drives, CPUs, and GPUs.

 

If you go here:

http://www.tomshardware.com/charts/desktop...chmarks,31.html

You will see that there's a list of applications, and they've ranked the performance of the different CPUs. So this may help you, SneaksieDave :)

 

There's no i7 ranking yet, but from what I've read about it, I'm excited to see it come out because it looks like it kills the competition. Sorry, AMD lovers ;)


From what I've heard, Tom's Hardware is not impartial, so I wouldn't put too much stock in what they write.

 

And about the lithography again: I am not an expert myself, but our professor told us that the limit will be reached sooner than we think. There isn't much room left to go... But of course we haven't hit the limit yet, so you're right about Moore's Law.


* Are they just more processors stacked on the chip?

Yes.

 

And if so, does that really help much?

No. :P As noted, most current apps aren't sufficiently parallel and so can't take proper advantage. This may change eventually. In the meantime, multicore only really helps for multitasking (and for the few apps which are parallel - a small handful of games for example).

 

Which doesn't mean it's not useful - it can be very useful. But the specific performance gains, if any, depend heavily on the technical nature of the workload you're giving it.
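
To put a number on "depends heavily on the workload": the usual rule of thumb is Amdahl's law - if a fraction P of a program's work can run in parallel, then N cores give a speedup of at most 1 / ((1 - P) + P/N). A quick sketch (the parallel fractions below are made-up examples, not measurements of any real program):

#include <cstdio>

// Amdahl's law: upper bound on speedup when a fraction p of the work is
// parallelizable and runs on n cores. The p values below are made up.
static double amdahlSpeedup(double p, int n) {
    return 1.0 / ((1.0 - p) + p / n);
}

int main() {
    const int cores = 4;
    const double fractions[] = { 0.25, 0.60, 0.95 };
    for (double p : fractions) {
        std::printf("%3.0f%% parallel -> at most %.2fx on %d cores\n",
                    p * 100.0, amdahlSpeedup(p, cores), cores);
    }
    return 0;
}

So even a program that is 60% parallel tops out at well under 2x on a quad core, which is why "more cores" alone doesn't guarantee much.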

 

* What the hell happened to the recent (maybe 6 months or so?) announcement (by Intel IIRC) where they said a "new breakthrough" allowed for processors thousands of times faster than today's?

Still in R&D I imagine. Don't hold your breath. ;)

 

I just got a program (not tried yet) which claims it lets you choose which core to use for which programs. Not sure whether Windows normally manages this, or the processor.

Handling allocation of processes to CPUs is one of the operating system's jobs. I suspect all that program is doing is asking Windows to assign specific processes to specific CPUs, pretty-please-with-a-cherry-on-top.

My games | Public Service Announcement: TDM is not set in the Thief universe. The city in which it takes place is not the City from Thief. The player character is not called Garrett. Any person who contradicts these facts will be subjected to disapproving stares.

Handling allocation of processes to CPUs is one of the operating system's jobs. I suspect all that program is doing is asking Windows to assign specific processes to specific CPUs, pretty-please-with-a-cherry-on-top.
Any idea what criteria it applies? If I had several programs open - a text editor, a calculator, TDM, Dark Radiant, a web browser, an email program - I'd like to think it wouldn't put TDM and DR on the same core. (The existence of this program suggests it might.) Can it, and does it, change while a particular program is running, e.g. if its processor usage increases, or does it only allocate at program launch?

Any idea what criteria it applies?

I don't know the details. The fine detail is probably secret, knowing Microsoft. There's a scheduler in the OS which allocates CPU time to tasks (even in single CPU systems) and I expect it's reasonable. There's no technical reason as far as I know why it can't swap tasks between CPUs.

 

Assigning processes to CPUs manually is kind of a last resort and shouldn't be necessary most of the time. The scheduler should already be smart enough to keep TDM and DR on different cores, for example.

 

(I'm using "task" here to refer equally to processes and threads.)

My games | Public Service Announcement: TDM is not set in the Thief universe. The city in which it takes place is not the City from Thief. The player character is not called Garrett. Any person who contradicts these facts will be subjected to disapproving stares.

* How much faster are these duo/quad/whatever cores? Charts which demonstrate?

 

That depends on the type of application. Assume you have a quad core at 2.6 GHz: in the worst case, the application runs as if on a single 2.6 GHz core. In the best case you might get close to 4 x 2.6 GHz (not quite, because of overhead, but close).
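
To get anywhere near that best case, the application itself has to split its work across threads - the OS won't parallelise a single-threaded program for you. A toy sketch of the idea (purely illustrative, not how any particular game engine does it):

#include <cstdio>
#include <thread>
#include <vector>

int main() {
    const std::size_t n = 1000000;
    std::vector<int> data(n, 1);

    const std::size_t numThreads = 4;              // e.g. one thread per core
    std::vector<long long> partial(numThreads, 0); // one result slot per thread
    std::vector<std::thread> workers;

    // Each thread sums its own chunk of the array.
    for (std::size_t t = 0; t < numThreads; ++t) {
        workers.emplace_back([&, t] {
            const std::size_t begin = t * n / numThreads;
            const std::size_t end = (t + 1) * n / numThreads;
            long long sum = 0;
            for (std::size_t i = begin; i < end; ++i)
                sum += data[i];
            partial[t] = sum;                      // no shared writes, so no lock needed
        });
    }
    for (auto& w : workers) w.join();

    long long total = 0;
    for (long long s : partial) total += s;
    std::printf("total = %lld\n", total);          // prints 1000000
    return 0;
}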

 

On my 2 GHz Pentium 4 I tried to run Need For Speed ProStreet and it was a slideshow (even just the menu). On my new machine it runs totally smoothly on the highest settings. Of course I also bought a new graphics card along with it and additional RAM, so the CPU alone might not be the deciding factor.

 

New games will increasingly employ parallel strategies, and some already do.

 

* What the hell happened to the recent (maybe 6 months or so?) announcement (by Intel IIRC) where they said a "new breakthrough" allowed for processors thousands of times faster than today's?

 

Don't know which breakthrough was meant, but there are some ideas already in research, like quantum computers, which are MUCH faster than any classical machine for certain kinds of problems. Of course that also needs new paradigms for software development as well. :)

Gerhard


And about the lithography again: I am not an expert myself, but our professor told us that the limit will be reached sooner than we think. There isn't much room left to go... But of course we haven't hit the limit yet, so you're right about Moore's Law.

I'd have to see exactly what limit your professor was referring to, but research on improving lithography seems to be going strong. Research into extreme UV and X-ray lithography aims to lower the wavelength (not sure what the current status of that is). You can get ridiculous resolution with electron-beam lithography, but it is expensive and takes a long time to write large areas. In the past there was an effort to develop a more parallel and therefore cheaper e-beam writing technique called SCALPEL; it was abandoned after the bust but might be resurrected. Nano-imprint lithography is also proposed as a much cheaper way of getting resolutions normally obtained only by e-beam. There are even crazier things like plasmonic antennae that can confine light to sub-wavelength dimensions, and other near-field lithography techniques (with near-field microscopy it's possible to resolve features much smaller than the operating wavelength, and people are trying to do near-field lithography as well).

 

I think there are conceivably still breakthroughs to be had in lithography. According to Moore, the ultimate limit of his law is when you get down to molecular-scale transistors. However, even supposing we could pack molecule-sized transistors together on a CPU, that won't actually increase operations per second if there's too much heating or interference between transistors to be useful, or if the CPU can't pull in data fast enough to keep all those transistors busy because the interconnect between CPU and RAM can't keep up.

 

To explain where I'm coming from, I've gone to conferences where people plotted Moore's law, still going strong, and on the same plot showed operations/second, which is not increasing nearly as fast as transistor density and has kind of petered out over the last few years. The conclusion is that the performance of machines you can currently buy is limited by something other than transistor density. These conferences are biased towards optical interconnect research, so usually interconnect bandwidth is cited as the main issue.


Interesting, I didn't know about that plasmonic antenna approach. Apparently Google didn't return anything useful; the best hit was a plasmonic laser antenna with a wavelength of approximately 0.8 µm, which would be pretty sucky for ICs. :D Do you have a paper of some sort about it? I've done some investigating, and it appears the current theoretical lower bound for component resolution is 5 nm using nano-imprint lithography, which would be just one order of magnitude smaller than the current technology. But we'll see what time brings... :)


This is from Caltech in 2002, "Plasmon printing – a new approach to near-field lithography." It claims that it "promises" a factor of wavelength/20, and they're using visible light. It seems to cite 10 nm as the best obtained in 2002. [EDIT: This particular approach does not directly translate to lithography, because it only works where you have already deposited metal nano-particles, so you would need some practical way of patterning the particles to make it useful.]

 

http://kik.creol.ucf.edu/publications/2002-kik-mrs.pdf

 

I don't think there's any fundamental reason that we could never get atomic-scale transistors. We've already assembled individual atoms to spell out "IBM" by pushing them around with an AFM tip; it just takes a really long time. Who knows, maybe someone will invent some super-fast array of AFM tips to rapidly assemble atomic-scale things. The main point is that interconnect technologies will limit actual performance (operations per second) before we hit any fundamental limits on transistors per unit area. We can keep inventing new lithography techniques and pack transistors more densely, but it won't help if the data can't get to and from those transistors as fast as they can process it, or if the chip just overheats when you try to use it.


And in practice, AMD is currently trying to move from 65 nm to 45 nm (Intel is already there, but I haven't heard any talk of going lower - yet) - so all that research is great, but you can't really buy the stuff.

 

In the meantime, the real speed-ups come from packing 4 or 8 cores on one CPU, or from packing 128 instead of 16 shaders on a GPU. :)

 

Unfortunately, the software side isn't ready to take advantage of that except in some special cases :(

"The reasonable man adapts himself to the world; the unreasonable one persists in trying to adapt the world to himself. Therefore, all progress depends on the unreasonable man." -- George Bernard Shaw (1856 - 1950)

 

"Remember: If the game lets you do it, it's not cheating." -- Xarax


Despite the actual clock frequency staying roughly the same, processors (even single core) are still getting faster, as there are many other factors influencing processing speed: on-die cache size (I think my new laptop has 6 MB cache right now), FSB clock speed, bandwidth, improved memory controllers (Intel made the same step as AMD did a few years ago and moved the memory controller onto the die), faster memory technology plus the processor design in general (better "algorithms").

 

They are also introducing specialised instruction sets all the time, which applications can use without going multi-threaded, but the application needs to be aware of those instructions. Parts of the D3 code are actually taking advantage of some of these vendor-specific instructions.
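
For anyone who hasn't seen what using one of those instruction sets looks like from the application side, here is a minimal SSE example using compiler intrinsics, adding four floats per instruction. (Just an illustration of the general idea - it's not code taken from D3 or TDM.)

#include <xmmintrin.h>  // SSE intrinsics
#include <cstdio>

// Add two float arrays four elements at a time with SSE.
// Assumes n is a multiple of 4 to keep the example short.
static void addSSE(const float* a, const float* b, float* out, int n) {
    for (int i = 0; i < n; i += 4) {
        __m128 va = _mm_loadu_ps(a + i);              // load 4 floats
        __m128 vb = _mm_loadu_ps(b + i);
        _mm_storeu_ps(out + i, _mm_add_ps(va, vb));   // 4 additions in one instruction
    }
}

int main() {
    float a[8] = { 1, 2, 3, 4, 5, 6, 7, 8 };
    float b[8] = { 8, 7, 6, 5, 4, 3, 2, 1 };
    float out[8];
    addSSE(a, b, out, 8);
    for (float v : out)
        std::printf("%.0f ", v);                      // prints 9 eight times
    std::printf("\n");
    return 0;
}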


  • 2 weeks later...
I haven't followed this type of technology news for years (O, RLY? I'm playing TDM on a 1.4GHz proc), so I'm always left with the same wonderings:

 

* How much faster are these duo/quad/whatever cores? Charts which demonstrate?

* Are they just more processors stacked on the chip? And if so, does that really help much?

* Are we in fact then at some theoretical or practical limit, and not going to exceed 3.x GHz anytime soon?

* What the hell happened to the recent (maybe 6 months or so?) announcement (by Intel IIRC) where they said a "new breakthrough" allowed for processors thousands of times faster than today's?

 

Go to Tom's Hardware (tomshardware.com) if you want charts, benchmarks, and comparisons.

