
State of Virginia Technology Centers Down

bswooden writes "Some rather important departments (DMV, Social Services, Taxation) in the state of Virginia are currently without access to documents and information as a technology meltdown has caused much of their infrastructure to be offline for over 24 hours now. State CIO Sam Nixon said, 'A failure occurred in one memory card in what is known as a "storage area network," or SAN, at Virginia's Information Technologies Agency (VITA) suburban Richmond computing center, one of several data storage systems across Virginia.' How does the IT for some of the largest departments in a state come to a screeching halt over a single memory card? Oh, and also, the state is paying Northrop Grumman $2.4 billion over 10 years to manage the state's IT infrastructure." Reader miller60 adds, "Virginia's IT systems drew scrutiny last fall when state agencies reported rolling outages due to the lack of network redundancy."
  • HA fail (Score:4, Insightful)

    by Anonymous Coward on Friday August 27, 2010 @12:03PM (#33393622)
    How does a fault in a single SAN controller cause an outage of the entire data storage network? Expensive SAN solutions are expensive & highly redundant for a reason. This smells like a "Let's buy the cheaper solution" and/or an infrastructure design fail.
    • Re:HA fail (Score:5, Interesting)

      by cgenman ( 325138 ) on Friday August 27, 2010 @12:39PM (#33394116) Homepage

      Also, this can happen when you hire an external firm to manage something that you should be managing yourself. External managers for projects like this are motivated by extracting as much money as possible from you. Internal departments of technology, by comparison, are motivated by convincing co-workers to not shout at them.

      • Re: (Score:3, Funny)

        Also, this can happen when you hire an external firm to manage something that you should be managing yourself. External managers for projects like this are motivated by extracting as much money as possible from you. Internal departments of technology, by comparison, are motivated by convincing co-workers to not shout at them.

        B-b-but you're saying that the bloated corrupt government that takes money from people at gunpoint and has no incentives for efficiency might have done a better job than a private contractor that works on the God-given free enterprise system that rewards efficiency and punishes waste! That's unpossible!

        • Re:HA fail (Score:4, Insightful)

          by ultranova ( 717540 ) on Friday August 27, 2010 @04:15PM (#33397136)

          B-b-but you're saying that the bloated corrupt government that takes money from people at gunpoint and has no incentives for efficiency might have done a better job than a private contractor that works on the God-given free enterprise system that rewards efficiency and punishes waste!

          On the contrary, the free market did exactly as it was supposed to: it eliminated the inefficiency of redundant systems and a safety margin. Efficiency or the safety of redundancy, you can have one or the other but not both. That's why any important system should be managed by the government, and free enterprise should be limited to the role of logistical optimization it's actually good at.

          Unfortunately some people nowadays consider the free market their religion, so we got deregulation and the resulting financial crisis. Oh well...

          • S.R. Hadden: [imdb.com] First rule in government spending: why build one when you can have two at twice the price?
            • Re: (Score:3, Interesting)

              by ultranova ( 717540 )

              First rule in government spending: why build one when you can have two at twice the price?

              And sometimes that's exactly the right approach, except you should really build three or four or ten. One might argue that that's the very purpose of the government: to force inefficiency where short-term self-interest would result in long-term disaster - in other words, to avoid the tragedy of the commons.
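Whether "three or four or ten" copies actually help depends on the failures being independent. A toy calculation (plain Python, with a made-up per-replica failure probability) shows why people reach for more replicas, and why the independence assumption matters:

```python
# Probability that ALL replicas are down at once, assuming each one
# fails independently with probability p. Independence is the big
# assumption -- correlated failures (shared firmware, shared power,
# an untested failover path) are exactly what bit Virginia here.

def all_fail(p: float, n: int) -> float:
    """Chance that every one of n independent replicas has failed."""
    return p ** n

for n in (1, 2, 3, 4, 10):
    print(f"{n} replica(s): {all_fail(0.01, n):.2e}")
```

Each extra independent copy multiplies the all-down probability by p again, which is why redundancy looks so cheap on paper and why a correlated failure is so embarrassing in practice.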

        • Re: (Score:2, Interesting)

          Funny, I was unaware that Northrop Grumman were a scion of the free market. Could you name some of their non-government customers that provide more than 1% of their total revenue? It's called a Military-Congressional-Industrial Complex for a reason. But thanks for playing the strawman game.

    • Re: (Score:2, Interesting)

      Step 1) Design system so a single SAN controller is the only thing keeping the network running.
      Step 2) Use money saved by not adding redundancy/designing the system correctly to give self money.
      Step 3) Expect one component to last long enough for you to leave the job before it fails.
      Step 4) ????
      Step 5) Profit anyway because they don't get the concept of failures==bad things and keep paying you.

    • Re: (Score:2, Insightful)

      by g0bshiTe ( 596213 )
      I live in Virginia; it's more like business as usual for the Commonwealth.
    • Because there was more than one failure. FTFA:

      The system was built with redundancies and backup storage. It was hailed as being able to suffer a failure to one part but continue uninterrupted service because standby parts or systems would take over. But when the memory card failed Wednesday, a fallback that attempted to shoulder the load began reporting multiple errors, Nixon said.

      Cheap solution problem? Possibly. Infrastructure design fail? Possibly, but not likely. Couldn't critique it without seeing their setup, but it sounds like they designed some redundancy in. I wonder what kind of "memory card" failed. From the description, it sounds like it might be a cache module.

      • Because there was more than one failure. FTFA:

        The system was built with redundancies and backup storage. It was hailed as being able to suffer a failure to one part but continue uninterrupted service because standby parts or systems would take over. But when the memory card failed Wednesday, a fallback that attempted to shoulder the load began reporting multiple errors, Nixon said.

        Cheap solution problem? Possibly. Infrastructure design fail? Possibly, but not likely. Couldn't critique it without seeing their setup, but it sounds like they designed some redundancy in. I wonder what kind of "memory card" failed. From the description, it sounds like it might be a cache module.

        Regular testing of redundant systems is critical. Anyone who has done disaster planning knows this.

    • Re: (Score:3, Informative)

      by Wyatt Earp ( 1029 )

      Sweet Zombie Jesus.

      If the RAM in our 8TB Netgear SAN fries it doesn't blow up my office, what the hell are they and Northrop Grumman doing?

    • Yeah really. Before we got away from traditional hardware (NAS, SAN, etc.) we had piece of crap Dot Hill arrays and they had redundant power supplies and redundant controllers. There must be more to this story.
      • Redundant Array of Inexpensive Disks. RAID. Ok, maybe that scared them. Redundant Array of Raid Controllers - RARC? Nope, sounds Chinese. How about Redundant Infrastructure Array Audits? Nope, that definitely will not do...
    • How does a fault in a single SAN controller cause an outage of the entire data storage network? Expensive SAN solutions are expensive & highly redundant for a reason. This smells like a "Let's buy the cheaper solution" and/or an infrastructure design fail.

      On the plus side if the US government ever builds Skynet we know where to strike.

    • Re:HA fail (Score:5, Interesting)

      by wkcole ( 644783 ) on Friday August 27, 2010 @01:49PM (#33395102)

      How does a fault in a single SAN controller cause an outage of the entire data storage network? Expensive SAN solutions are expensive & highly redundant for a reason. This smells like a "Let's buy the cheaper solution" and/or an infrastructure design fail.

      RTFA!

      The problem was a dual (or worse) failure. What the article reveals is that while they may have had all of the right hardware in place and a mechanism for it to handle the most likely failures, they were missing the 'soft' components of a good HA system: routine testing of failover and a rapid repair plan. In the auto industry where failed systems can halt factories and rack up hundreds of thousands of dollars of cost per hour of downtime, it is the norm for HA systems to have frequent failover tests, to have on-site spares for critical components that can be replaced by on-site staff, and to have support arrangements that put a skilled human on-site with replacement hardware in a small amount of time. This is why traditional "enterprise class" systems are so expensive. They are designed for rapid diagnosis and repair, and a well-run enterprise that needs truly HA systems pays for expensive HUMAN support by their own staff and/or from IBM, Sun^WOracle, EMC, HP, etc. and monitoring systems on top of that. If you fail over your HA systems every Sunday at 02:00 (or whatever time is safe...) and have the right staff, processes, and support contracts in place, you will find nearly all of the latent failures and have them fixed before a true production failure exposes them.

      The most appalling thing about this to me isn't the failure. Some systems don't have safe times for testing failovers, and I know from personal experience that a component in an HA system that was working perfectly Saturday and has been idle since Sunday can go tits-up when needed on Wednesday. The real problem is the long outage. If the clowns in the VA state government were doing their jobs, they would not have a system like this without vendor support contracts to fix well-defined hardware problems (e.g. "bad memory card" ) within a few hours at most. This was something I always loved about working in a shop with the top-grade EMC contract. The Symmetrix and its associated gadgetry would call EMC about failures and we'd have a tech show up at the DC with parts before we even noticed anything unusual: costly, but nowhere near as expensive as killing all of the SAN-reliant systems for a random day every 3 years. The 4th 9 is not cheap or simple, because it always requires humans.
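For scale, the "4th 9" mentioned above translates into a very small downtime budget. A quick back-of-the-envelope sketch of the arithmetic:

```python
# Downtime budget implied by an availability target. "Four nines"
# (99.99%) leaves under an hour per year; Virginia's 26+ hour outage
# blows that budget roughly thirty times over.

MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

def downtime_minutes_per_year(availability: float) -> float:
    """Minutes of allowed downtime per year at the given availability."""
    return MINUTES_PER_YEAR * (1.0 - availability)

for nines, avail in [(2, 0.99), (3, 0.999), (4, 0.9999)]:
    print(f"{nines} nines: {downtime_minutes_per_year(avail):7.1f} min/year")
```

Four nines works out to about 52.6 minutes per year, which is exactly why the parent says you can't get there without humans, spares, and support contracts.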

      • They had pretty much the most expensive support contracts possible. The problem is that apparently all this waste of taxpayers' money has bought nothing useful.

      • by swb ( 14022 )

        If you fail over your HA systems every Sunday at 02:00 (or whatever time is safe...)

        (voice of tech ignorant executive)

        "We can't be down then. We have remote workers that want to do things at that time."

        "The overtime for that window is too expensive, and we can't do it during production hours. We'll just assume you planned carefully."

        "You just told me part of the reason that system is so expensive is that it is much less likely to fail. Well, we're not paying for a spare."

        And after hearing that, I want to duct tape those fucking executives to their $1500 chair and let them watch while I take a powder-actuated nailer to their precious Mercedes S550.

        • Re: (Score:3, Interesting)

          by hawguy ( 1600213 )

          If you fail over your HA systems every Sunday at 02:00 (or whatever time is safe...)

          (voice of tech ignorant executive)

          "We can't be down then. We have remote workers that want to do things at that time."

          "The overtime for that window is too expensive, and we can't do it during production hours. We'll just assume you planned carefully."

          "You just told me part of the reason that system is so expensive is that it is much less likely to fail. Well, we're not paying for a spare."

          And after hearing that, I want to duct tape those fucking executives to their $1500 chair and let them watch while I take a powder-actuated nailer to their precious Mercedes S550.

          Why the rage? Just spell out very clearly (and in writing) exactly what will happen if component X fails, and the cost to implement redundancy now. When component X fails and the company loses Y dollars of revenue and the CEO comes to you, just pull out the email and say "I tried to design redundancy but he wouldn't spend the money".

          It worked for me when I tried to get money for a spare battery cabinet on our primary UPS. I told my boss that if a single battery in the string fails during a power failure

  • It's always money (Score:2, Interesting)

    by Anonymous Coward
    I'll tell you exactly how. Some manager somewhere said that it cost too much to add redundancy. It's happened over and over at my extremely large company, and it will continue to happen as long as money is the prime concern.
    • by jsnipy ( 913480 )
      at least now they can quantify their (bad) decision with their loss of productivity and perhaps loss of revenue.
    • by rwa2 ( 4391 ) *

      Heh, it shouldn't be about the money, though... they should have specified high availability from the very beginning. They often throw it out during the prototyping stage, saying they need to Keep It Simple Stupid just to get things working, but then all the software is never designed to be able to handle redundancy, and shoehorning it in later becomes pretty much like starting again from scratch.

      Also, designing in redundancy is usually worse than having no redundancy at all if it's never tested. There sh

      • by sjames ( 1099 )

        Not putting all of your eggs in one basket, even a double walled basket with 2 handles and shock absorbers, would be a good start.

    • by Daniel_Staal ( 609844 ) <DStaal@usa.net> on Friday August 27, 2010 @12:14PM (#33393764)

      Add in politics: Get a couple of representatives arguing over where the money (if any) should be spent, and all possibility of real redundancy and fault-tolerance go out the window.

      It's true in larger government organizations than this. The failures just haven't occurred yet.

      • Re:It's always money (Score:4, Interesting)

        by geekoid ( 135745 ) <dadinportlandNO@SPAMyahoo.com> on Friday August 27, 2010 @01:09PM (#33394528) Homepage Journal

        This is a private sector failure. NG is the culprit here, not the government.

        This is why you should be very wary of bidding out work to a 3rd party. They don't care about your city. They are not thinking about how their decisions impact the city in 10-20-50 years.

        And while infrastructure is far more complex and expensive than people who don't deal with it realize, $2.4 billion over 10 years? $240 million a year? That is a price where they should have a tested redundancy system. A single-point SAN failure? Shame on NG.

        I hate to burst your preconceived bubble, but my years in the private sector and public sector have taught me that most government agencies are far better at keeping their own infrastructure. More reliable and long-standing.

        • Re: (Score:3, Insightful)

          My 'preconceived bubble' is based on my current job for the US government, and the situation we have in our department.

          It might be true on average that government agencies are better at keeping their own infrastructure, especially if they can manage to keep their accounting and design of that infrastructure at a lower level. However, once those decisions pass the level from the internal to the external (or: from those hired for the job, to those elected/appointed into it), that long-term planning appears to break down, in favor of political squabbles.

          • by gclef ( 96311 )

            However, once those decisions pass the level from the internal to the external (or: From those hired for the job, to those elected/appointed into it), that long-term planning appears to break down, in favor of political squabbles.

            As someone who's worked both sides of the public/private line, allow me to assure you that this is not unique to government. I've seen plenty of boneheaded design decisions made by upper management for obscure/bizarre/just-plain-wrong reasons in both private and government gigs.

        • by sjames ( 1099 )

          It MAY be a government failure as well. When you write the impossible into a bid, make the bidding process tremendously complex and make the cost of even bidding too high for most potential contractors (by expecting a complex analysis up-front for free) you eliminate all but the largest contractors with a fat legal department. If you then require acceptance of the lowest bid with no allowance for confidence level you set up a perfect storm for a ripoff. You assure that each bid you receive will be a lie bas

    • by joebok ( 457904 )

      Money is always the prime concern for a business. If the cost of adding redundancy is higher than the expected cost of dealing with network failure, then why would a business do it?

      That being said, I often see the cost of dealing with a significant network interruption being underestimated - either the $ cost or the probability of it happening.
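The parent's trade-off can be written out explicitly. A toy expected-value comparison; every number below is invented for illustration, not taken from the article:

```python
# Compare the annual cost of redundancy against the expected annual
# cost of an outage. Every figure here is a made-up illustration;
# none of them come from the Virginia contract.

def expected_outage_cost(prob_per_year: float, hours_down: float,
                         cost_per_hour: float) -> float:
    """Expected annual loss from outages under these assumptions."""
    return prob_per_year * hours_down * cost_per_hour

redundancy_cost = 50_000              # hypothetical: a second array, yearly
expected_loss = expected_outage_cost(
    prob_per_year=0.2,                # one serious failure every 5 years
    hours_down=24,
    cost_per_hour=20_000,             # idle staff, lost fees, cleanup
)

print(f"expected annual outage cost: ${expected_loss:,.0f}")
if expected_loss > redundancy_cost:
    print("redundancy pays for itself on expectation")
else:
    print("redundancy costs more than the expected loss")
```

As the parent notes, the decision usually goes wrong not in the multiplication but in lowballing the probability or the per-hour cost.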

      • by cgenman ( 325138 ) on Friday August 27, 2010 @12:49PM (#33394250) Homepage

        Everyone seems to think that a network outage is no big deal, until the network goes down. That's when people start thinking of the burn rate of an entire organization sitting on their thumbs while that network of off-the-shelf Linksys routers is replaced by some kid at Best Buy. Or how that 5k dollars per year for a backup external line suddenly pales in comparison to the 5k dollars per hour your organization is wasting because you were a cheap bastard.

  • Northrop Grumman already runs the U.S. military. Might as well turn over IT to them too.
  • by Nemesisghost ( 1720424 ) on Friday August 27, 2010 @12:07PM (#33393664)
    Maybe they should hire Terry Childs, at least he won't let their network go down for something like this.
  • Redundancy (Score:3, Funny)

    by CmdrPorno ( 115048 ) on Friday August 27, 2010 @12:10PM (#33393708)

    Silly state, expecting to get redundancy for only $2.4 billion. Don't they realize they're going to have to pay a lot more than that to get a reliable network?

  • Awful. (Score:4, Insightful)

    by boneclinkz ( 1284458 ) on Friday August 27, 2010 @12:11PM (#33393716)
    Our primary concern should be a complete audit of World of Warcraft server hardware, to ensure that this vulnerability does not exist in other, more vital networks.
  • by Omega Hacker ( 6676 ) <omegaNO@SPAMomegacs.net> on Friday August 27, 2010 @12:15PM (#33393780)
    I think the id10ts who pulled off this stunt are rather DIMM....
  • Question. (Score:2, Insightful)

    by U8MyData ( 1281010 )
    Umm, so what's the point of having a SAN if it isn't redundant? Methinks there is more to this story.
    • Re: (Score:3, Insightful)

      Probably involving executives vacationing in nice tropical locales by rewarding themselves with hefty bonuses. Meanwhile some poor IT guys weren't given the budget that reflected how much the State was paying out, and had to cobble together a SAN solution, or pick the cheapest one off the shelf. The IT guys will, of course, be the patsies for this whole episode, with the CEO and CTO all huffing and puffing and vowing to State officials and lawmakers that they're doing everything they can to get to the bot

    • You'd think they'd at least do RAID 1 Mirroring. Then they could just hot swap in another drive, sync it, and be on their merry way. Why centralize your data services if you're not going to do it right?
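For anyone unfamiliar with the mirroring the parent describes, here's a toy sketch of the RAID 1 idea. This is a teaching illustration in Python, nothing like real array firmware:

```python
# Toy illustration of the RAID 1 idea: every write lands on both
# copies, and reads can be served by whichever copy survives.
# A teaching sketch only -- real arrays do this in firmware, with
# rebuild, scrubbing, and a lot more besides.

class Mirror:
    def __init__(self) -> None:
        self.disks = [{}, {}]          # two "disks": block -> data
        self.failed = [False, False]

    def write(self, block: int, data: bytes) -> None:
        for i, disk in enumerate(self.disks):
            if not self.failed[i]:
                disk[block] = data     # mirror the write to every live disk

    def read(self, block: int) -> bytes:
        for i, disk in enumerate(self.disks):
            if not self.failed[i] and block in disk:
                return disk[block]     # any surviving copy will do
        raise OSError("both mirrors lost")

m = Mirror()
m.write(0, b"dmv records")
m.failed[0] = True                     # first "disk" dies
print(m.read(0))                       # still served from the mirror
```

The hot-swap-and-resync step the parent mentions is the part this sketch leaves out, and it's the part that has to be tested.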
    • Re:Question. (Score:4, Interesting)

      by Darth_brooks ( 180756 ) * <.clipper377. .at. .gmail.com.> on Friday August 27, 2010 @01:13PM (#33394582) Homepage

      Depends on the SAN. The article (as most tech articles are) is very short on scope & details. So "one chip" went bad. Should that bring everything to a screeching halt? The answer should be "no" but in practice we can all say that it's more often a case of "not usually." From TFA:

      It was hailed as being able to suffer a failure to one part but continue uninterrupted service because standby parts or systems would take over. But when the memory card failed Wednesday, a fallback that attempted to shoulder the load began reporting multiple errors, Nixon said.

      So Array Alpha shits the bed. You follow your failover procedures and start running on Array Zappa. That immediately starts throwing errors. Ok armchair QBs, let me switch to my Keanu Reeves voice and ask "What do you do?" You built a pretty damned redundant system there and you're still down. Sure, it'd be nice if they had a backup in another DC they could fail to, but they don't. Doesn't matter; eventually you're playing the double / triple / quadruple hulled oil tanker game. Either way, redundant SANs aren't cheap and aren't all that easy (it's not exactly a "the boss's nephew who 'knows all about computers' set it up last weekend" level of complexity). The TFA also has these points:

      Full function may not be restored until Monday.

      Experts who examined the system determined that no data were lost except for those being keyed into the system at the moment it failed, Nixon said.

      Other than the fact that proofreading and the use of proper grammar are no longer requirements to work for a Virginia newspaper, what do those points tell us? Sounds to me like they hit the last line in the DR procedures: restore from backup. Depending on what their backup strategy is (maybe they're splitting several terabytes across a tape robot that only supports 200/400 gig tapes because that robot is the only device the vendor supports) and how truly important the affected system is (this may be a system where the powers that be said "fsck it, they can process renewals by hand and we'll bring everything back up on Monday after we test on Saturday"), a return to business on Monday might be SOP. But that wouldn't sell newspapers (or make talking points with the voters...) now, would it?

      Maybe there was a major screwup here. Maybe they never tested their failovers and maybe that 2nd SAN was bad out of the box. I'm a little more willing to cut some slack and say "man, that sucks. Glad it's not my ass on the line." Karma's a bitch like that. I like to take these stories as an opportunity to rethink where my own single points of failure are rather than point & laugh and tell everyone how I'll never lose any data because I'm running RAID 5...

    • by geekoid ( 135745 )

      Here is an educated guess:

      When getting the bid, NG promised redundancy.
      NG stalled and then fell behind schedule.
      The redundancy system became less 'important' due to time.
      NG went live.
      NG let a bunch of contractors go.
      NG says their in-house staff will take care of it.
      NG's new hires get stuck at the end of the project and do just enough to consider it 'done'. Several amateur mistakes were made.

      What's happening right now:
      People who work for the state IT are showing everyone the email they got from NG saying the system was

  • by BitHive ( 578094 ) on Friday August 27, 2010 @12:27PM (#33393930) Homepage

    Guys, accidents happen. This "Northrop Grumman", whoever they are, will no doubt be fired and not receive any more contracts once word of this gets out. This will put pressure on them to provide better services, or be out-competed by other entrepreneurs. Our free market system works, you just need to expect this kind of thing when it's government doing the hiring.

    • They get all defensive (no pun intended) when you point out that this was one of Mark Warner's crowning achievements as governor......

    • Guys, accidents happen. This "Northrop Grumman", whoever they are, will no doubt be fired and not receive any more contracts once word of this gets out. This will put pressure on them to provide better services, or be out-competed by other entrepreneurs. Our free market system works, you just need to expect this kind of thing when it's government doing the hiring.

      The problem is that it's the government selecting the vendor. If the government would just get out of the vendor-hiring-business maybe the Free Market could fix this mess.

      • The problem is that it's the government selecting the vendor. If the government would just get out of the vendor-hiring-business maybe the Free Market could fix this mess.

        I'm not sure if you're joking or not. If you're serious ... um, who do you suggest should hire people to run the government's servers, other than the government?

  • not say. The F***Ing NDA stops me from saying anything about the stuff I saw in NGC's IT.

    Well, I guess I can say it is BROKE NOW and you have to fix it. Told you so!

    • NDAs are such a bitch.

      I think you should talk to Julian Assange at Wikileaks so that those of us that want the juicy details can get them.

      P.S. There's a fat unmarked manila envelope in it for you. We all chipped in. It's a really nice envelope.

    • There's a Post Anonymously button for that reason. Given the state of their IT department, I doubt they'll be able to figure out who broke their NDA, even if the police manage to give them an IP.

  • My company works on a project that NG lost on a re-compete bid. I cannot go into much detail, but suffice it to say: I am not at all surprised that they screwed up maintenance and management, based on what I have had to deal with in the software they developed.

  • This is what you get for hiring a military contractor to do a civilian's job. All $2.5 billion gets you in the military is a hammer and a toilet seat. You don't start getting functional hardware until the budget reaches $100 billion.
  • Anyone know what brand of SAN went down? My company had a similar issue where our SAN had a major outage, and the vendor claimed it was "an error that never happens, we swear".
  • Funny that I should receive an email today inviting me to a Northrop Grumman Information Systems Hiring Event. The event occurs on the 25th of August and I received the email on the afternoon of the 27th. Failed there too!
  • Ok, in this case it probably is the bureaucracy at fault. But it isn't in all cases. In my previous job we had an architect who would take it upon himself to "value engineer" a vendor's solution, with unpredictable results. I'm not sure why -- we had budget. Maybe it was his way of seeming more valuable? This led to "solutions" like a SAN cobbled together from disk arrays, controllers and switches from three different vendors that were not meant to work together, had never been tested in the chosen co

    • Comment removed based on user account deletion
      • > What makes you think Northrop Grumman had a choice? They still work for the state IT department at the end of the day.

        There are typically very high penalties for not meeting your service levels. A 24 hour unplanned outage can blow a half year's profits for the contract. Like any outsourcing company, NG did have a choice -- don't take that contract under those conditions.

    • by pnutjam ( 523990 )
      Hey now! I love to "value engineer", but I would never work back from a series of proprietary solutions.

      And I never have a budget. I've dreamed of having a budget, must be nice.
      I would stroke it and love it and call it George...
  • There's something rotten in the state of Virginia?

  • It happens (Score:4, Informative)

    by kilodelta ( 843627 ) on Friday August 27, 2010 @01:30PM (#33394850) Homepage
    But $2.4 billion over ten years comes out to $240,000,000 per YEAR! With that kind of money they could replace their infrastructure a few times over every year.

    This is a clear example of the malfeasance that happens when government gets corrupted by corporate interests. Taxpayers in VA should be up in arms about this one.

    Here's my story of state agency screw-ups. Two jobs ago I was working for the Secretary of State's office here. We had the opportunity and funding to get our IT infrastructure in order when the Help America Vote Act (HAVA) became law. We were able to build out a secure and redundant room to house our critical infrastructure.

    Physical access was by key and alarm code only. Redundant power included an APC Symmetra UPS system, backed up by a 125kW natural-gas-fired generator. We even made sure to extend tendrils from the redundant power out to the MDF so the ISP could use our power system. Also had redundant cooling tied to the generator.

    The one Achilles heel of the operation was DNS. Ours was provided from outside our space. I suggested they build a zone locally so that we'd have DNS services if the state's went down. But they quashed it as being too difficult! Ut si!

    Well, one day there was a massive power outage in the city. We were still up and running, lights on, air conditioning on, but couldn't get in or out of the internal network even though the ISP circuits were still up. Yup, DNS!
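The fallback described above is conceptually simple. A hedged sketch of the idea: `resolve_upstream` and the zone entries here are hypothetical stand-ins, not anything from the actual setup:

```python
# Sketch of the local-zone fallback: try the upstream resolver first,
# and fall back to a locally maintained zone when it is unreachable.
# resolve_upstream and the zone entries are hypothetical stand-ins,
# not anything from the actual Virginia setup.

LOCAL_ZONE = {"intranet.example.gov": "10.0.0.5"}   # hand-maintained copy

def resolve_upstream(name: str) -> str:
    # Stand-in for a real DNS query; simulate the state's DNS being down.
    raise TimeoutError("upstream DNS unreachable")

def resolve(name: str) -> str:
    try:
        return resolve_upstream(name)
    except (TimeoutError, OSError):
        if name in LOCAL_ZONE:
            return LOCAL_ZONE[name]    # outage survived for local names
        raise

print(resolve("intranet.example.gov"))
```

In practice you'd run a local secondary or caching resolver rather than hand-rolling this, but the point stands: internal names should resolve even when the outside world doesn't.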
    • by CAIMLAS ( 41445 )

      Are you kidding? That's a trivial amount to:

      * maintain tens if not hundreds if not thousands of proprietary (legacy) applications
      * maintain the many, many workstations
      * maintain the fabric for many, many workstations
      * maintain the servers which provide services, many of which are interconnected and do not cope with modern technologies well.
      * maintain the storage for all of that
      * SECURE all of the above
      * make it as fault tolerant as possible

      Shit, I suspect the Cisco contract is probably a good 3rd of that pe

  • In my first job, I changed the boot-up message on the VAX to "If only my girlfriend went down as often as this computer!" I kinda assumed it would scroll up off the terminal and nobody would see it. It, uh, didn't. One of our female programmers, who was famous for overreacting, came into work and threw a hissy fit. We fixed the message and decided to tell everyone we couldn't figure out who put it there. This is why you shouldn't give all developers administrator privileges!
  • Whenever I drive thru Virginia (up I-81) there's a sign announcing "Entering Virginia's Technology Corridor" which is followed by hundreds of miles of rolling green pastures.

    What, there was a proliferation of cow-tipping?

  • To anybody who feels incredulous at the notion of a single point of failure taking down a purportedly redundant system: I suspect you have limited experience with the issues and challenges of managing a very large system infrastructure. The complexity of such systems goes well beyond the knowledge of any individual, so notions of fault tolerance across the enterprise are highly theoretical. Even with extensive planning and testing, the gotcha is in what you don't know. Sometimes, one of those What-You-Don't-

  • So tell us a little bit about your education Mr. X.

    "Well, I have a certificate in Microsoft Administration and a computer science degree, plus I am Cisco certfied."

    Oh excellent!!! Thank you for your time.

    So tell us a little bit about your education Mr. Y.

    "Well, I have a degree in computer science and I ran several storage area networks for several years now from my previous employer Widgets are Us"

    But, do you have any certifications?

    "No."

    Thanks Mr. Y. It has been nice speaking with you, don't call us, we

  • Having had memory go bad in a "SAN device" (I put it in quotes because nobody in their right mind would actually think a single-pathed, non-redundant disk array is really SAN-grade hardware) from a fruit-flavored vendor before, I can actually have some pity for the guys responsible/working on it. Debugging it is a great time too, because your filesystem rebuild generally works. As does copying small amounts of data. It is only once you try to copy a couple of terabytes that things go to hell.

    Filesystem data and inode corruption bo
