Intel squashes Rambus Bugs

Fooster writes "According to this article in Forbes, Intel has identified and solved the problems in the i820 chipset for Rambus. There are few details on the nature of the solution." As Forbes points out, the challenge is getting OEMs back on board - I'd be skittish as well.
  • ... when they could just go with AMD. Cheaper. Better. Different.

    --
    Child: Mommy, where do .sig files go when they die?
    Mother: HELL! Straight to hell!
    I've never been the same since.

  • It's funny that the company that came up with it couldn't fix it first. BTW - did anyone else see that wonderful Microsoft "news brief" on the sidebar?

    I'm not looking forward to proprietary memory modules.
  • Forbes mentions that the challenge will be to get OEMs back on board...

    However, I submit that this will be proof of Intel's monopoly hold on the chip market. They will have all OEMs back on board and satiated in no time.

    Imagine what would happen to a smaller company if they screwed up this badly... they'd be gone forever.

    Oh well... I don't even care that much... just felt like pointing out the obvious.

  • Rambus, the company that developed the technology, was not the source of the problem. It was Intel, and their 820 chipset, which is why it was the duty of Intel to fix the problem. So there.

  • Seems the Rambus engineers had been using the "Mars Orbiters for Dummies"(TM) book written by NASA engineers [slashdot.org] for writing their specs.

  • by ecampbel ( 89842 ) on Tuesday October 12, 1999 @09:26AM (#1620096)
    Rambus is another casualty in the PC world, where the best technology seems to get passed up for either the incumbent technology or the cheaper technology. The best technology doesn't necessarily dominate. The MacOS is a good example of this, as is the FireWire bus. Despite Intel's backing, I would bet that this technology will only be used in the niche market of high-end servers. The $200 PCs of the world will never want to pay a premium for a small increase in performance. The current SDRAM technology will be tweaked for years to come, and Rambus will never be the dominant standard.
  • Intel had things all their own way for a long time, and wasted that time with stupid dudes in Typar suits. As an ISP, my Intel penetration is minimal... they just don't have anything I want.

    As an OEM, Intel has alienated me in much the same way MS has. Intel has nothing to sell me anymore... they have lost the performance edge, and the shoddy and hurried design is showing through the cracks. RAMBus, while a nice idea, was not executed well, and Intel deserves to take it on the chin. Didn't a major RAM mfg company just switch their production line to DIMM production?

    The industry will eventually come back to RAMBus, but this is just the wake-up call that Intel, and more importantly its competition, needs.

    The minute Intel started selling its product like beer and cars, I knew they were in deep trouble... the only reason you do that is to hide the fact that there is no real technically compelling reason to buy a new system.

    Intel P-III processors do NOTHING to enhance your Internet experience.

    harumph
  • Rambus is dead. Rambus has numerous drawbacks, such as a higher manufacturing cost and licensing fees. DDRAM has a nice window to kill it. DDRAM is basically SDRAM with double-edged clocking, so it can operate at effective speeds up to around 266MHz. It also costs little or nothing more to produce than SDRAM, and there are no licensing fees or royalties that need to be paid. Expect some non-Intel chipsets to appear shortly that take advantage of this.
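    To put rough numbers on the DDR point, here's a quick C sketch; the 64-bit DIMM width and 133MHz base clock are my own illustrative assumptions, not figures from this thread:

```c
/* Peak-bandwidth back-of-the-envelope: single- vs double-data-rate SDRAM.
 * All numbers are assumed for illustration, not vendor specs. */
#include <stdio.h>

int main(void) {
    const double bus_bytes = 8.0;          /* 64-bit DIMM */
    const double pc133_mt  = 133e6;        /* one transfer per clock */
    const double ddr266_mt = 2.0 * 133e6;  /* transfers on both clock edges */

    printf("PC133 peak:   %.2f GB/s\n", pc133_mt  * bus_bytes / 1e9);
    printf("DDR-266 peak: %.2f GB/s\n", ddr266_mt * bus_bytes / 1e9);
    return 0;
}
```

    Same core, same pins, roughly double the peak rate - that's the window DDRAM has.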

    Furthermore, most of the memory manufacturers are not supporting Rambus. Samsung just dropped Rambus and is going back to SDRAM.

    Intel really shot themselves in the foot WRT Rambus.
  • We knew that eventually Intel and Rambus would solve the glitches with the 820 chipset. But the big question is: who wants 820-based PCs? Intel is unlikely to be able to show much performance benefit of DRDRAM over PC100, let alone PC133 or DDR. Most PC programs play quite nicely in a two-level cache hierarchy and don't much resemble McCalpin's STREAM memory bandwidth benchmark. DRDRAM has a lot of latency problems, especially those related to management of which memory devices are kept active and which are put into low-power modes. Also, since DRDRAM yields at 800 Mbps/pin are so abysmal, most mainstream 820-based PCs shipped in the next few quarters will have down-binned 600 or 700 Mbps RIMMs in them, which will make them look bad compared to PC100 SDRAM.

    Now what about cost? It is estimated that a typical 820-based PC will have about a $250 cost premium over a 440BX-based PC with PC100. What is the cost difference to the PC buyer? $300? $400? There are no simple fixes for this problem. The economics of DRDRAM hurt in so many places - larger die size, low AC functional yield, sky-high test costs, uBGA packaging costs, module and motherboard costs (you need special PWBs with tightly controlled characteristic impedance for those Rambus transmission lines). These problems are so significant that the only DRAM vendor still building DRDRAMs is Toshiba, and that is because it is a contractual supplier to Sony for the PlayStation 2.
  • by severian ( 95505 ) on Tuesday October 12, 1999 @09:36AM (#1620100)
    I think that regardless of whether Intel fixes the RamBus bugs or not, it's a poor technology. The problem with memory speed breaks down into two issues: latency and bandwidth. Although everyone salivates at the thought of enormous bandwidth, in today's systems what really causes problems is memory latency. And ironically, although RamBus promises higher bandwidth than SDRAM, it actually has *higher* latency.

    There are some great articles regarding bandwidth vs. latency [tomshardware.com] in general and RamBus [tomshardware.com] in particular at Tom's Hardware [tomshardware.com]. To summarize the articles, even today's current SDRAM architecture provides more than enough bandwidth, especially with the current sophisticated cache systems that reduce memory accesses dramatically. However, what's tying up the CPU is latency, especially as CPUs get faster.

    In other words, CPUs generally request small amounts of data with any given request, but they have to wait a long time for each request to come back. As CPU speed has increased, better cache systems have mitigated the resulting increased bandwidth demands, but nothing has helped the resulting latency problems. So the way to speed up memory is to decrease latency and not worry too much about bandwidth just yet. Unfortunately, RamBus goes in the exact opposite direction.
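    To see why latency dominates cache-line-sized requests, here's a toy C model; every figure is invented for illustration, not taken from any datasheet:

```c
/* Toy model of a single cache-line fill: time = latency + size/bandwidth.
 * Both parts below are hypothetical; the numbers are for illustration. */
#include <stdio.h>

static double fill_ns(double latency_ns, double line_bytes, double gb_per_s) {
    return latency_ns + line_bytes / gb_per_s;  /* 1 GB/s == 1 byte/ns */
}

int main(void) {
    printf("low-latency part    (40 ns, 0.8 GB/s): %.0f ns\n",
           fill_ns(40.0, 32.0, 0.8));
    printf("high-bandwidth part (70 ns, 1.6 GB/s): %.0f ns\n",
           fill_ns(70.0, 32.0, 1.6));
    return 0;
}
```

    Doubling the bandwidth saves only 20 ns on a 32-byte fill; the fixed latency still decides the winner.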

    That said, I guess we should never underestimate the power of a behemoth like Intel to force acceptance of poor technologies :-/

  • by overshoot ( 39700 ) on Tuesday October 12, 1999 @09:41AM (#1620102)
    From what I've gathered of the signaling problems with Rambus there are only two main ways they could 'fix' the present problem, and neither is exactly a fix.
    • Increase the trace separation to reduce the crosstalk and pattern-dependent impedance mismatches. Along with this an adjustment in termination resistance might be some help.
    • Change the trace lengths to move the resonant point away from the line fundamental frequency. Apparently a resonant mode near the signaling rate was one of the main gotchas.

    In both cases new motherboard layouts will be needed, and since both will take up more space the whole floorplan may change. At best, this will take a few months to get the MBs designed, through validation and regulatory approval (not a trivial issue with this kind of bandwidth!) and into the production pipe. Kiss Q4 goodbye and probably Q1; the memory shops won't be seeing any demand until Q2 at soonest, if at all.

    On top of that there will always be the charming issue (which Rambus seems to have in other areas as well) that the operating area for the memory subsystem will have a Swiss-cheese character. Instead of a 'schmoo' plot, with a maximum frequency of operation and constraints on voltage and temperature, there will be areas of operation and failure, alternating. Maybe 300 MHz and 800 MHz will be OK, but 700 will be out. In fact, that seems to be the situation right now.
  • Who is petty enough to give a damn about having a first post?
  • laymil wrote:
    Rambus, the company that developed the technology was not the source of the problem. It was Intel, and their 820 chipset, which is why it was the duty of Intel to fix the problem. So there.

    Actually, there doesn't seem to be all that much wrong with the 820, aside from some speed limitations in the RAC. The big gotcha is a PWB-level signal integrity problem with the reference implementation of Rambus which wasn't anticipated by the relatively superficial signal-integrity analysis that the Rambus gang did.
  • Firewire is currently the accepted standard in digital video; a niche, no doubt, but one that still gives it a strong toehold. I don't know whether it will 'take off big', but it is currently being used for the things that it's good for.
  • Don't uncritically accept the story that Rambus is a superior technology. Dell (a devout Rambus house) did some very interesting benchmarking, and although Intel and Rambus had the results censored from the IDF proceedings, you can see a copy at InQuest Market Research [inqst.com].

    Bottom line: Rambus appears to be substantially (like 15-40%) slower than PC100 SDRAM for typical applications. Oops.
  • Actually, I find the whole latency argument specious at best. The whole point of serializing RAM is to allow increased bandwidth _and_ increased clock speeds. So for this type of argument you may contend that the latency of Rambus is 30% longer (I'm not sure what the actual number is) than its current DDR SDRAM competitor, but just remember that Rambus will easily scale upwards 30% faster than SDRAM. This _negates_ any latency advantage that SDRAM had, and also gives a huge pipeline-burst advantage to the Rambus component, especially as the speeds increase. It's similar to the PowerPC versus Alpha comparison. The PowerPCs generally had short pipelines versus the Alpha's long pipelines (usually more than double the length). This allows the lower-clocked PowerPC to keep up with the faster-clocked Alpha for the most part, but the Alpha is easier to manufacture at higher clocks than the PowerPC. Just look at the '2nd gen' Motorola G4 chip, moving from a 5-stage pipe to a 7-stage in order to make clocking up easier. Rambus may be roughly equal to SDRAM today, but once the speeds crank up, Rambus will leave SDRAM in the legacy parts bin.
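    If you plug made-up numbers into that trade-off (a C sketch, not a benchmark; the 30% figures are the ones assumed above), you can see where the crossover sits:

```c
/* Sketch of the scaling argument: give "Rambus" 30% more latency but
 * 30% more bandwidth than "SDRAM", then compare short vs long transfers.
 * All parameters are invented for illustration. */
#include <stdio.h>

static double xfer_ns(double latency_ns, double bytes, double gb_per_s) {
    return latency_ns + bytes / gb_per_s;  /* 1 GB/s == 1 byte/ns */
}

int main(void) {
    const double sizes[] = { 32.0, 4096.0 };  /* cache line vs long burst */
    for (int i = 0; i < 2; i++) {
        double sdram  = xfer_ns(40.0, sizes[i], 1.0);  /* baseline */
        double rambus = xfer_ns(52.0, sizes[i], 1.3);  /* +30% lat, +30% bw */
        printf("%6.0f bytes: SDRAM %5.0f ns, Rambus %5.0f ns\n",
               sizes[i], sdram, rambus);
    }
    return 0;
}
```

    On these invented numbers SDRAM still wins the 32-byte case; the higher clock only pays off once the bursts get long.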
  • Intel P-III processors do NOTHING to enhance your Internet experience.

    However, that was what people said about MMX when it first came out, too (myself included.)

    Right now the lower-end PIII processors are priced only slightly higher than the highest-end celery chips. I'm sorry, but I never buy an SX-class processor when the next level up is only about $40 more.

    And I don't regret anything I've bought recently more than I regret buying a K6-2 processor from whatever-their-name-is. That clone processor company.
  • You're using Mac OS as an example of "current technology or cheaper technology", right? Cooperative multitasking and dreadful instability are hardly the hallmarks of anything I'd call "the best technology".

    - A.P. (I agree with the rest of the stuff, though. Rambus is for servers, let it linger and die there.)
    --


    "One World, one Web, one Program" - Microsoft promotional ad

  • To be fair, Rambus has been underperforming compared to expectations. Rambus even at full speed has shown only small gains over PC100 memory on most applications, looks like a tossup against PC133, and looks to lose once PC266 memory is commonplace. To claim it is the best technology is a stretch when it is ambiguous at best. Visit Tom's Hardware and look at the benchmarks.

    And again, MacOS is also not in any way clearly superior. It has significant flaws, and had sufficient flaws (multitasking, memory protection, licensing issues, power-user & developer support) to make it only ambiguously better even back before Win95.

    Not that I'd claim superior technology always wins, but those are two terrible examples.

    Firewire is more interesting. It will be interesting to see whether firewire or USB 2 winds up as a dominant standard, since firewire is clearly superior except for licensing issues. (And licensing issues can be a killer ... no one wants to commit to a technology that can be swept out from under them).
  • by SlydeRule ( 42852 ) on Tuesday October 12, 1999 @10:31AM (#1620116)
    I'm not clear on how Rambus is supposed to be "the next step in the chip-set food chain".

    Intel has (IIRC) said that Rambus won't be used on Celerons, won't be used on 100-MHz FSB P-III's, won't be used on Xeons, won't be used on Itania (no I won't say Itaniums), and won't be used on systems with more than two CPUs.

    So here we have a memory technology which is limited to 1- and 2-processor 133-MHz FSB Pentium III's. Those systems don't need Rambus, since they can work with PC-133.

    Rambus claims to be faster than PC-133, but over and over again the benchmarks refuse to confirm that.

    Where's the future in Rambus?

  • In 1984, MacOS sure was superior technology to MS-DOS. And it remained that way until Win NT was introduced. I think MacOS is still better than Win 9x, which is the biggest seller in the market by far.

  • I'm not clear on how Rambus is supposed to be "the next step in the chip-set food chain".

    I really don't believe they have any intention of it being something of worth into the future; I see them just trying to fragment the x86 platform further than it already is.
  • That's something that I am curious about. Is AMD making any kind of contribution to the Linux community or pumping money into a distribution? It seems to me that it would be in their best interest.
  • Trivia:

    RDRAM is also used in the N64.
  • I'd like to see Firewire succeed as some kind of universal peripheral bus, but it's hard to see what need it fulfills _for_the_typical_PC_user_ (major emphasis on typical) outside of the digital video editing field. I think USB is getting shoehorned into merely replacing the PC serial, parallel, and PS/2 ports for low-speed devices that the masses use, like low-budget scanners, printers, joysticks, keyboards, ad nauseam.

  • Oh, that's easy! In biology, when something is said to be higher up the food chain, it means that it's more complex and gives lower returns if consumed.

    Sounds like a very accurate description, if you ask me.

  • They don't have two pennies to rub together.
  • It also costs little or nothing extra to produce than SDRAM and there are no licensing fees or royalties that need to be paid.

    Then how come the new vid cards that use it sell for $100 more than the ones that don't? Just greed? Product differentiation?

  • From what I've gathered of the signaling problems with Rambus there are only two main ways they could 'fix' the present problem, and neither is exactly a fix.

    I had wondered, myself, when this whole Rambus issue was first posted on /., how Intel could "fix" this problem. I mean, I understand attempting to fix it for future units to be shipped, but what about units that have already been shipped and have the problematic Rambus technology implemented in them? I had figured that, it being a hardware problem, they'd have to live with it, so my next thought is this:

    Would Intel consider recalling those affected units and replacing them?
    I'm going to assume no, tragically enough, but I wish they would. Those people who have received such poor, faulty equipment shouldn't have to live with it. I also realize that I'm probably making it sound worse than it really is, but I'm just wondering:
    1. How big an issue is this (for those who have the faulty MBs, etc.)?
    2. What is Intel planning to do about what they've already shipped?
    3. Do we really care about Rambus? (from the other posts, I would guess no, and I know I don't want any Intel crap/Rambus stuff.)
    4. Did the people who purchased the faulty Intel/Rambus equipment know they were buying Rambus technology? If so, did they buy it for the Rambus claims? (I realize this is a question we can't really answer b/c we don't know what was going on in their minds when they purchased said hardware.)
    I also realize that most of those questions are rather poorly thought out, and of little import, but I was just wondering...
  • crack comes in bags? i thought it was a liquid you had to boil into a solid, then smoke. i'm probably wrong though.
  • The N64 uses the original Rambus technology. The stuff Intel is trying to shove down everyone's throat is Direct Rambus - two generations beyond the Nintendo memory.
  • Then how come the new vid cards that use it sell for $100 more than the ones that don't? Just greed? Product differentiation?

    Currently there isn't much supply of double-data-rate RAM, so prices are high. But there are no licensing fees like there are for Rambus, so once DDR gets ramped up, prices should be about the same as for standard SDRAM.
  • Another poster said in response to the above post that [the reason you haven't seen AMD back any distribution of Linux] is they "don't have 2 pennies to rub together."

    That's true enough, as things go (AMD seems to consistently release great chips which feature a "lower than expected quarterly earnings" bug), but supporting a Linux distribution is not the same as throwing money into a blender just to watch the pretty paper shred. In fact, AMD places advertisements (that's a very real cost of business!), and supporting a Linux distribution would be great advertising for them.

    Now I work in advertising for a big one-syllable computer maker that rhymes with Hell and so far does not make any computers with AMD Inside, though I think they should.

    If AMD would sink as much into a single distribution of Linux as it does in a few days of straightforward advertising, the returns would be large and lasting. A company which supports linux and makes what mainstream publications (like PC World) say is the fastest chip they've ever run in a desktop might have a great following ...

    Goodwill is more important than companies seem to realize, though.

    But if say, SuSE linux were to feature a big graphic on the box that said "This product rules with Athlon processors!" (it's sort of plausible, considering that AMD has at least one factory in Germany), I think it would be cool.

    Just a thought. Anyone from AMD listening?

    timothy


  • AMD cops Intel's FUD all the time. The general public is currently deluded into thinking that Intel's processors are SO much faster than AMD's. They say that AMD processors are unreliable. They say that AMD processors are slow. The truth is, a K6-2/400 is just about as fast as a PII/400, and easily just as reliable. And now Intel are trying to tell us that the new Athlons are slow.....

    What has this got to do with the i820? Well, all of the OEMs will get back on board, and it is because of this FUD campaign. Due to Intel's monopoly of the market, and the FUD about reliability and speed, OEMs have to use Intel chipsets with Intel processors. The computer-illiterate family is going to buy a Pentium computer, because that's the one they have heard of.

    This stinks. Intel is the M$ that people don't hate as much. Well, if you like to support the little guy, go AMD. With the Athlons shitting over everything Intel has at the moment, make your next PC "Athlon Inside".

    M$'s domination has gone too far. Intel's domination has gone too far. AMD boxes with Linux are the way to go, unless you like giving monopolies your money....
  • That just goes to show you how bad the crack is..
  • unfortunately, AMD refers to their CPUs as Microsoft Windows compatible processors. Of course, that's to distinguish them from some of their other chips that don't run Windows, but still...
  • MCA vs ISA - MCA was proprietary IBM technology that was definitely faster than ISA, but for what was out back then, ISA was more than good enough.

    Beta vs VHS - This was mentioned during the previous article on the Rambus problem, but deserves mentioning again. Beta was better, but how many could actually tell the difference, and how many wanted to pay for a marginal difference.

    PC vs Mac - Of course MacOS has weaknesses that are well described, but it was well ahead of DOS.

    In the end the successful technology was the one that had the blend of "just enough performance to do what I want" and low cost. Rambus doesn't do it; PC133 barely does, mostly because the marginal cost of PC133 vs PC100 isn't too bad. PC266 probably has some time to go, just because there isn't a killer app for it.

    Rambus only has a chance if the following conditions are met:

    1) There is a killer app that requires multiple rambus channel type speed.
    2) There is not a cheaper alternative that is adequate.

    Neither condition exists today in the mass-market PC or workstation. And even with servers, a single Rambus channel really isn't anything special.

    Dastardly
  • Not really. Even if RDRAM (why don't they call it that, RAMbus Dynamic Random Access Memory?) gets a 50% bandwidth increase, most games (what do you think pushes computers in the consumer space?) use less than 150MB/sec of bandwidth. Say the proc wants access to 1K of data: it waits 50 clocks for RDRAM and 30 clocks for SDRAM. Even if RDRAM had 10GB/sec of bandwidth, it would not counter the fact that for a small piece of data such as this, there would still be the overhead. True, if you are doing streaming memory tests, RDRAM is faster, but increase the bandwidth all ya want, latency is not negated. Consider FPM DRAM (remember that?). The whole reason that EDO replaced it was that EDO had lower latency (keeping the pages open longer allowed subsequent memory accesses to skip a lot of overhead, I think). SDRAM replaced EDO because the proc does not have to wait as long to get data, since SDRAM is ready to transfer when the proc is ready to receive (hence synchronous).
  • P-IIIs DO nothing to enhance the Internet experience. They do loads for gaming, but the Internet is easier to sell. And since when did MMX help anything except Photoshop? The PIII is decent, but not enough considering that the K7 (I refuse to say Athlon) is kicking its ass all over the place. And yes, if you need FP the K6-2 sucks. But otherwise, it is a decent processor.
  • No, they should be pumping money into Be. To tell the truth, Be demos are a lot more impressive than Linux demos.
  • It's a crying shame that AMD is set to make the same mistakes with the K7 (relatively speaking) as they did with the K5 design (the K6 was originally designed by NexGen). In those days, integer operation was where it was at, so they focused on integer performance. What happened after that? Quake and the whole 3D acceleration explosion. Processors with strong FPUs performed better. AMD couldn't have seen that one coming.

    Now, they seem poised to get themselves in deep crap here in much the same way by focusing on letting everyone know how good a job they are doing at running Windows and Windows applications. I feel this is a mistake in progress. If Intel is doing what I think, making relationships with various players in the Linux community, or just simply buying them, it would be in AMD's best interest to do a little of the same.

    Linux would be a very good move for AMD. The simplest association of their name with an important project or distro would help them immensely.



    Big Din K.R.
    "If you're not on the gas, you're off the gas!"
  • That's what prefetch instructions are for. The idea is that the programmer (or the compiler, hopefully soon) will know better what memory might be needed. So by increasing your memory bandwidth you get the ability to preload into cache the data needed by the branches coming up. Hence for an optimized program (and a large L2 cache), the 'observed' RDRAM latency drops to zero. This is a big feature of Merced (and a feature of SIMD, although that is both optional and P3-only).
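    The idea looks roughly like this in C (a sketch only: __builtin_prefetch is the GCC-style hint, and the prefetch distance of 16 elements is an invented tuning value, not something from this thread):

```c
/* Software-prefetch sketch: hint the cache to start fetching a future
 * array element while we work on the current one. Prefetch is only a
 * hint, so reading a few elements past the end of the array is harmless. */
#include <stddef.h>

double sum(const double *a, size_t n) {
    double s = 0.0;
    for (size_t i = 0; i < n; i++) {
        __builtin_prefetch(&a[i + 16], 0, 1);  /* read, low temporal locality */
        s += a[i];
    }
    return s;
}
```

    With bandwidth to spare, those early fetches are nearly free, which is how the 'observed' latency can approach zero for predictable access patterns.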
  • Does anyone know who AC got his Athlon system from? It doesn't say in his diary. But if it were from AMD, I would call that support for Linux.
  • They have said there are no plans to use Rambus in Merced? When and where did they say this?
  • By the way, sign the Athlon Motherboard support petition here [petitionpetition.com].

  • Your 50 clock/30 clock example makes an erroneous assumption. Let me put it this way: chip A, clocked at 50MHz, has a 50-clock latency, while chip B, clocked at 30MHz, has a 30-clock latency. How long does chip C have to wait for its request to be fulfilled from chip A or B?

    Your error is assuming that latency is measured only by clock-cycle delay on the processor side, when latency is really the wall-clock time it takes to get the data back from the RAM.

    If SDRAM can keep up with Rambus technology two years from now I'll be mighty impressed, but Rambus will probably be cheaper and faster at that point, with similar latencies but much more bandwidth. SDRAM just hasn't run out of steam at this point...
  • What killed BetaMax was that the tapes weren't long enough to record feature-length films.
  • Measuring delay by clock cycles is pretty ambiguous. The correct way is to measure the latency in ns; then at least you can compare across clock speeds.

    It is interesting to note that if you look at the first-cycle delays in DRAM from FPM to EDO to SDRAM, they are pretty consistent. 60ns FPM was 5-3-3-3 at 66MHz, and 70ns EDO is 5-2-2-2 (4-2-2-2 at 60ns) at 66MHz; at 100MHz, SDRAM is 5-1-1-1-3(2)-1-1-1. Note the middle number is the CAS we hear about, not the first one. So, the first-cycle number has gone from 70ns to 50ns from FPM to SDRAM, but that can be attributed to process technology improvement; there really hasn't been an architectural improvement in latency.
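    Converting those cycle counts to nanoseconds makes the comparison concrete. Here's a C sketch using the timings quoted above (the 4-word burst framing is my assumption):

```c
/* Convert "x-y-y-y" DRAM burst timings into nanoseconds so parts at
 * different clock speeds can be compared. The timings are the ones
 * quoted in the comment above; treat them as illustrative. */
#include <stdio.h>

static void show(const char *name, double mhz, int first, int next) {
    double cycle_ns = 1000.0 / mhz;
    printf("%-11s first word %5.1f ns, 4-word burst %5.1f ns\n",
           name, first * cycle_ns, (first + 3 * next) * cycle_ns);
}

int main(void) {
    show("FPM @66",     66.0, 5, 3);  /* 5-3-3-3 */
    show("EDO @66",     66.0, 5, 2);  /* 5-2-2-2 */
    show("SDRAM @100", 100.0, 5, 1);  /* 5-1-1-1 */
    return 0;
}
```

    The first-word numbers land in the 50-76 ns range across all three generations, which is exactly the no-architectural-improvement point made above.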

    You also have to specify which latency you are talking about especially with SDRAM and DRDRAM because the latency depends a lot on the access pattern.

    So, before we get into arguments, let's make sure there is a consistent measurement. Just one point: DRDRAM has numerous latency numbers depending on what is being accessed and when.

    I don't see any possible way DRDRAM could be cheaper than (DDR) SDRAM on the same process given similar economies of scale. The extra die area of the DRDRAM alone kills that; then combine it with royalties on the chips, RIMMs, and chipsets, and it is absolutely impossible for DRDRAM to cost less than SDRAM, all other things being equal.

    Also, JEDEC is working on memory that will be faster than DDR-SDRAM leveraging on work from the SLDRAM group, and going even beyond that. And, the JEDEC designs don't carry the baggage that DRDRAM carries.

    Dastardly
  • FireWire is much better than SCSI or SCSI2 for storage devices (UW SCSI2 may be faster, but it's a lot more expensive, e.g. $80 cables!)

    fast, plug-n-play, hot-pluggable, daisy-chainable, and the FireWire bus can provide power for some devices; what more could you ask for?

    And the 800 Mb/s FireWire standard is just around the corner.

    It's time to dump SCSI. It's served well for 15 years now. Let's move on to something better...

    (my $.02)
  • And ironically, although RamBus promises higher bandwidth than SDRAM, it actually has *higher* latency.

    Gee. . . just like Quantum Bigfoot drives promised more capacity, yet had higher latency... >:o)

    I guess we should never underestimate the power of a behemoth like Intel to force acceptance of poor technologies :-/

    Fine... I'll just try to afford something non-Intel (and non-encumbered)... I haven't bought an Intel CPU since 1994. I haven't had such super luck avoiding Intel chipsets, however. I still run a TX in one of my house systems.

    GO VIA and ALI!

    --

  • It's not even difficult for them to do. They can say that "all new Pentium III's are only supported on the 820 - they might work on other chipsets, but we will not provide tech support." This is completely legitimate and is enough of a message to OEMs - most of them depend on Intel.
    Imagine what would happen to a smaller company if they screwed up this badly... they'd be gone forever.
    Yep... it's happened MANY times. It's all a matter of resources and how long you can last. Intel and MS can afford to make mistakes, occasionally, as long as the timing is right. What's interesting now is that with the Athlon, Intel has to move faster. That's one of the important reasons why competition is really necessary in these markets.

    --bdj

  • In other words, CPUs generally request small amounts of data with any given request, but they have to wait a long time for each request to come back.

    Wrong - the DRAMs only see the traffic on the far side of the caches. With a modern CPU using a write-allocating cache (Slot 1 or the new AMD thingy), you're going to see only full cache-line transfers - that's 32+ bytes per transfer, no small amounts of data. The overwhelming majority (>99%) of memory transactions are going to be this size.

    Instead consider the following:

    • time to transfer 32 bytes on an 800MHz 2-byte wide Rambus - 32/2 x 1.25ns = 20ns
    • time to transfer 32 bytes on a 100MHz 4-byte SDRAM = 80ns
    • with PC133 it's 60ns
    • with 100MHz DDR it's 40ns
    (not all these solutions use approx the same pad space/pins)

    On top of this add the DRAM access (RAS/sense) and precharge (if you can't hide it) times which are roughly constant for the different DRAMs (since they all tend to share roughly the same cores)

    I know the current RamBus technology is being run slower than 800MHz - so take these numbers with the appropriate grain of salt

    I suspect that Intel's suffering from bringing a first RamBus implementation to market - anything new takes a few attempts to get right :-) Sadly, "always plan to throw one away" isn't so practical in the silicon marketplace

    There are two things that I think Intel probably has in mind with going to Rambus:

    • granularity - at the bottom end of the market we're going to hit the same sort of wall that framebuffers hit a while back - memory systems will only need so much memory - but chips will continue to get denser - eventually you only need 1/2 a DIMM's worth - so a smaller faster bus lets you play in this space more economically (of course M$'s code bloat may mean this never happens)
    • RamBus's many more multiple banks than other DRAMs should allow more parallelism in the memory subsystem, esp with the sort of mostly random accesses you see on the other side of a cache - but for this to win you need to see a lot of concurrent accesses at the memory controller - something that I'd guess is hard on the other side of slot1 (better for integrated DRAM controllers) - and better for CPU's like EPIC
    Disclaimer: I've designed graphics systems based on both RamBus and traditional drams - I've never worked for RamBus or Intel
  • Goodwill is more important than companies seem to realize, though.

    No it's not. People won't pick a slower chip over a faster one when they are comparably priced. Fuck good will. It's about benchmarks, cost and availability.

    Allow me to illustrate my point with an outrageously obtuse analogy. Let's say you're buying a new car and looking at the Porsche Boxster and a Tie Fighter. For the sake of argument, they are the same price. The Tie Fighter is manufactured by The Empire - the same people who blew up Alderaan. The Boxster is made by a German company that supports Linux (let's just say...)

    I want the Tie Fighter. It can fly and has lasers. I would find a way to rationalize the purchase.

    Saying "Our distribution will rock your socks on the Athlon" is certainly cool, but I don't think it will help AMD as much as if they were able to produce large volumes of chips and have compatible motherboards on the market!
  • SDRAM DIMMS are 8 bytes wide (64 bits).

    That article on Tom's Hardware Guide, "Performance Impact of Rambus [tomshardware.com]", says that RDRAM's bus width had to be reduced by 75% to 16 bits (2 bytes) to run at 800MHz. Work backwards and you get 8 bytes for SDRAM (8 reduced by 75% is 2).

    Now if we divide all your SDRAM access times by 2, we get 20ns for 100MHz DDR SDRAM, which is the same as 800MHz Rambus.
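    Redoing the parent's 32-byte math with the corrected 8-byte DIMM width (a C sketch of pure wire-transfer time, ignoring RAS/CAS/precharge overhead as both posts do):

```c
/* 32-byte transfer times with the corrected bus widths. Pure data
 * movement only; access and precharge overhead are ignored, as in
 * the parent posts. */
#include <stdio.h>

static void xfer(const char *name, double mtransfers_per_s, double bus_bytes) {
    double ns = 32.0 / (mtransfers_per_s * 1e6 * bus_bytes) * 1e9;
    printf("%-13s %4.0f ns for 32 bytes\n", name, ns);
}

int main(void) {
    xfer("Rambus 800",  800.0, 2.0);  /* 16-bit channel at 800 MT/s */
    xfer("PC100 SDRAM", 100.0, 8.0);  /* 64-bit DIMM */
    xfer("PC133 SDRAM", 133.0, 8.0);
    xfer("DDR @100MHz", 200.0, 8.0);  /* both edges: 200 MT/s */
    return 0;
}
```

    DDR at 100MHz ties 800MHz Rambus at 20 ns on raw transfer time.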

  • Sure it's better than SCSI... SCSI has a 5MB/s bus, and SCSI-2 has got 20MB/s if Fast/Wide. But, well, why compare it to those - both are legacy. SCSI-2 Fast is still used on SCSI cdroms/cdrw/dvds because it's all that's needed, and I assume cheaper than UW SCSI. All the drives are UW, U2W, or soon 160/m (most of Ultra3).

    In any case, when I first heard about FireWire when it was the hot new technology, it was referred to as SCSI without being bounded by legacy support. Legacy support makes life horrible for creating the best possible. Still... FireWire will have some problems, as Intel's going to put lots of marketing into USB 2.

    Oh.. and the $80-for-the-cable bit: when you pay thousands for 10k RPM drives to make your big terabyte servers, pay for the powerful RAID cards, the guys to make sure it stays together, etc., paying for the cable is a bit minute.

    PS. UW SCSI is UW SCSI-3. Anything past F/W SCSI-2 is SCSI-3, UltraX (and Wide). There will be no SCSI-4.. just ultras...
  • Oops, you're right - my mistake, sorry. However, my other points still stand - there are other advantages to RDRAMs in the future that have nothing to do with bandwidth or latency.
  • What killed BetaMax was that the tapes weren't long enough to record feature-length films.

    Actually it was the fact that the market was flooded with Japanese VHS players costing MUCH less than the Betamax players.
    BTW, I still have some feature-length films on Beta somewhere.

  • and in 1985 Amiga Workbench was doing fully pre-emptive multitasking in 128K of RAM, with dedicated custom chips for graphics and sound, displaying 4096 colors on screen, with 4-channel stereo.

    unfortunately, Commodore could not sell to save their life, and was FAR FAR too late with CPU updates.

    smash (a 68020 (or better, 68030 with MMU) based amiga should have been available at a decent price in 1990 - rather than 1994 :P)

  • heh.

    The n64 was overhyped as well :P

    Maybe its an omen :)

    smash
  • I would think that all the PC RAM manufacturers (like NEC) that decided to switch production from Rambus back to SDRAM because of the Rambus problems would just continue with their plans, given the high price and low availability of SDRAM lately. I think they could make some guaranteed money by going back into producing SDRAM. Going back into Rambus production is going to be a crap shoot. The big question is: does Intel have the problems fixed or not? That's the $64,000 question. If they are just saying they do and they don't, and OEMs all jump back on the bandwagon, they will get burned - and what do you think will happen when the OEMs get burned, and in turn the consumers? I think Intel should not move so fast on pushing OEMs, because if they are just BS'ing everyone they will KILL themselves, and AMD and Cyrix may just knock Intel off their high horse.
  • Wrong.

    Once CPUs go past 700 MHz in speed, the current PC100 and PC133 SDRAMs will become the big bottlenecks if you have to process very large graphics and database files. Remember, hard drive speed bottlenecks have been alleviated with ATA-66 IDE and SCSI Ultra-Wide and Ultra2-Wide technology, and graphics cards are not the bottleneck either (thanks to the work of nVidia, Matrox, ATI and S3).

    This is why things like Rambus DRAM, and attempts to get SDRAM to go even faster than 133 MHz, are being developed.
  • If you are using 160/m and spending thousands on 10k drives, Fibre Channel is the competitor in that market, not FireWire. FW is Good Enough(tm) to replace SCSI subsystems up to SCSI2 UW. Since it promises to attack the consumer market, FW hard drives shouldn't command the high premium that SCSI drives command.
  • ahh, true. I forgot all about Fibre Channel. It would be nice if FW could take the middle ground between IDE and SCSI. SCSI is expensive and few make adequate drives at a reasonable price, and IDE is still too CPU intensive...

    I also haven't seen any good IDE-with-SCSI-chip drives for a long time. Those were great for home systems: not much more money, and all/most of the benefits of SCSI. Nowadays it's too split... it's either IDE for cheap storage, or SCSI for fast/reliable storage...
