Thursday, September 30, 2010

Dell names former Cisco executive to lead networking, Sep 23, 2010 02:36 pm

Dario Zamarian will be Dell's first VP for networking, possibly signaling an increased focus on the area
by Stephen Lawson

Dell has hired Cisco Systems veteran Dario Zamarian to lead its networking business, naming him as its first vice president dedicated to that division.

Zamarian has joined Dell as vice president and general manager of the networking division, the company announced on Thursday. Zamarian worked at Cisco for six years, most recently as vice president of systems and network management.

Networking has been a fairly small part of Dell's business so far, but the company has become more active in this area as it gears up to compete with the likes of Hewlett-Packard and Cisco in supplying all parts of a data center. The appointment of a vice president to lead networking may signal an accelerated push into this arena. In slightly more than a year, Dell has signed deals to resell network equipment from Juniper Networks, Brocade Communications and wireless LAN vendor Aruba Networks. The company also has a line of homegrown LAN equipment, which carries the PowerConnect brand.

Darren Thomas, who had been leading both the storage and networking businesses at Dell, will continue to run the storage division.

Dell's strength has traditionally been in PCs and servers, but the company has been trying to build up its portfolio in storage and networking, the other two big pieces in the data center puzzle. It failed in a bid to buy high-end storage vendor 3Par earlier this year after HP won a dramatic bidding war with a US$2.4 billion final offer. At least one analyst has speculated that Dell might try to acquire Brocade, a storage networking player that itself got into Ethernet LANs in 2008 by acquiring Foundry Networks.

Networking traditionally has been a commodity business at Dell, but virtualization and cloud computing have made the network a critical piece of a converged data center architecture, said Enterprise Strategy Group analyst Jon Oltsik.

"Now they have to have some network expertise and network management expertise just to build the kinds of bigger data centers that they're going after," Oltsik said. The appointment of Zamarian signals that Dell is acting to make this a reality, he said.

The Dell announcement marked the second time this week that a midlevel executive from Cisco, the dominant enterprise networking vendor, has left to join a rival company in a high-level role. On Monday, Polycom announced that Joseph Burton, former chief technology officer for unified communications at Cisco, had joined Polycom as chief strategy and technology officer.

NOTE: why would these guys leave Cisco for any other company, for that matter?

Google, others settle with DOJ over no-poaching deals, Sep 24, 2010 06:40 pm

Six companies have agreed not to strike deals that prevent each other from hiring away valuable employees
by Nancy Gohring

Google, Apple and several other companies have reached a settlement with the U.S. Department of Justice over charges that they agreed not to hire away high-profile workers from each other.

If approved by the court, the proposed settlement will conclude an investigation that started in the middle of last year. The DOJ says the companies acted anticompetitively by agreeing not to cold call each other's employees to offer them jobs.

The deals were between Apple and Google, Apple and Adobe, Apple and Pixar, Google and Intel, and Google and Intuit, according to the DOJ. The first such agreement was made in 2005 between Apple and Adobe.

The DOJ filed a civil lawsuit on Friday in the U.S. District Court for the District of Columbia and simultaneously filed a proposed settlement.

The DOJ said the no-solicitation agreements eliminated a significant form of competition to attract highly skilled employees and deprived employees of the chance at better jobs.

It also said senior executives at the companies actively managed the deals. For instance, Apple and Intuit each complained to Google that it had violated agreements between the companies, and Google investigated the incidents, the DOJ said. Each time, Google found it had not violated its agreements.

The suit implies that Adobe was bullied into its deal with Apple. Apple approached Adobe about agreeing not to cold call each other's employees, according to the DOJ. "Faced with the likelihood that refusing would result in retaliation and significant competition for its employees, Adobe agreed," the suit says.

As part of the settlement, the companies have agreed not to ban cold calling and not to enter, maintain, or enforce any kind of agreement that prevents competition for employees. The deal, which still must be approved by the court, would be in effect for five years and require the companies to take compliance steps to ensure they stick to it.

Google said it made the agreements not to cold call employees at Apple, Intel and Intuit in order to maintain a good relationship with the companies.

"Our policy only impacted cold calling, and we continued to recruit from these companies through LinkedIn, job fairs, employee referrals, or when candidates approached Google directly," Amy Lambert, Google associate general counsel, wrote in a blog post.

"While there's no evidence that our policy hindered hiring or affected wages, we abandoned our 'no cold calling' policy in late 2009 once the Justice Department raised concerns, and are happy to continue with this approach as part of this settlement."

In a statement, Intel denied any wrongdoing. "Intel does not believe its actions violated the law nor does the company agree with the allegations," it said. "The company is settling the matter because it believes it would not harm the company nor its ability to do business."

Adobe and Apple did not immediately reply to requests for comment.

NOTE: let there be competition, it's good for the employees.

Experts say Stuxnet worm could be state-sponsored

by Larry Magid

Could worms like Stuxnet threaten nuclear plants?

The Stuxnet computer worm that may have been designed to attack a nuclear facility in Iran could have been state sponsored, according to two security experts with whom I spoke.

"We can tell by the code that it's very, very complex to the degree that this type of code had to be done, for example, by a state and not, for example, some hacker sitting in his parents' basement," said Symantec security researcher Eric Chien. Chien added, however, that "there's nothing in the code that points to the particular author" or "what their motivation is."

TrendMicro security researcher Paul Ferguson agrees that Stuxnet was likely state-sponsored. "The amount of technical expertise that went into this doesn't appear to have been by some random lone individual person because they would have had to have access to these systems to develop this."

Not necessarily aimed at Iran nuke
Ferguson could not confirm that the target was an Iranian nuclear plant. "That is purely speculation at this point, there have been lots of theories as to what the target was." He said it could also have been aimed at oil and gas facilities or other installations that use Siemens control systems, which were specifically attacked, he said.

Serious threat
Both Chien and Ferguson said this type of code is a major security concern. "For the broader population, this is definitely a new generation of attack. We're not talking any more about someone stealing someone's credit card numbers, what we're talking about is someone being able to, for example, cause a pipeline to blow up or cause a nuclear centrifuge to go out of control or cause power stations to go down. So we're not talking about virtual or 'cyber' sort of implications here, what we're talking about are real life implications."

Ferguson said "it is a big deal because the utility companies, and manufacturing communities and the power companies and gas and oil companies for years have been using closed proprietary systems to manage their infrastructure and over the course of the past few years they've been making business decisions to use off-the-shelf software like Windows." He added that we are now seeing the same threats as on other networks, as facilities are connected to the Internet or allow access to thumb drives. This type of threat, according to Ferguson, is "absolutely new and that's why a lot of people in the intelligence community, in the Department of Homeland Security and different governments around the world are really kind of spooked by this development. It shows the targeted nature and sophistication of the criminal/espionage aspect to this."

Larry Magid is a technology journalist and an Internet safety advocate. Follow him on Twitter @larrymagid.

NOTE: a state-sponsored worm, what else do you expect from the IT age?

Weak server chip shipments on the horizon, analysts say, Sep 24, 2010 02:34 pm

Computer makers may order fewer chips on weaker server demand
by Agam Shah

The server market is showing signs of weakening, which could lead to a downturn in server chip shipments, further affecting the revenue of chip makers already hampered by lackluster demand for home PCs, financial analysts said Friday.

"Near term, we believe weak PC trends and server-related customer consolidation could lead to weak server orders because of uncertainty in current server suppliers' existing and future product road maps," said Apurva Patel, a financial analyst at Ticonderoga Securities, in a research note sent on Friday morning.

Top PC and chip makers so far have highlighted weakness in the home PC market, but not the server market. Second-quarter server revenue grew at Dell and Hewlett-Packard as customers refreshed IT infrastructures after delaying purchases during the recession.

But expected slow economic growth through the second half of the year could damage server chip shipments and revenue, said Dean McCarron, principal analyst at Mercury Research. Chip makers could be affected by server makers tightening up inventory in anticipation of weak server demand.

"At this point we have no confirmation that server numbers will be off, but it's a reasonable expectation that things will be soft," McCarron said. "If the OEMs tighten inventory, it reduces demand of [chips] from the chip manufacturer," McCarron said.

In response to reduced demand from server makers, chip makers may also try to cut down on inventory, McCarron said.

Intel and AMD have both lowered revenue expectations for the third fiscal quarter on weak PC demand in mature markets. AMD on Thursday lowered its revenue forecast for the third quarter, citing weak demand for laptops in Western Europe and North America. Both companies report their third-quarter earnings in October.

AMD is having trouble selling its 12-core Magny-Cours chip and has failed to make a dent in Intel's offerings, said financial analysts at FBR Capital Markets in a research note.

"AMD's product offering doesn't seem competitive versus Intel; AMD's Magny-Cours server product is seeing limited upside," the analyst firm wrote.

AMD is due to release new 8- and 16-core chips based on the Bulldozer architecture, which could reach servers early next year. Demand for AMD server chips could see an uptick when chips based on the new architecture ship, analysts said.

NOTE: AMD's race to keep up with Intel, don't you find that really interesting?

Windows Phone 7 won't support tethering, September 24, 2010 11:16 AM PDT

by Ina Fried

Microsoft's forthcoming Windows Phone 7 won't act as a hot spot, after all.

A Microsoft official recently suggested that it would be up to cellular carriers whether to allow so-called "tethering," but Microsoft confirmed on Friday that Windows Phone 7 won't support the feature at all when it goes on sale next month. The company won't say whether future versions might allow for tethering.

It's another technical limitation for Microsoft's re-entry into the phone market, adding to a list of features that other platforms support but Windows Phone 7 lacks, including full multitasking and copy/paste functionality. The phones will also not be available until next year for CDMA carriers such as Sprint and Verizon.

Obviously, fewer carriers is a blow, but it remains to be seen how large the impact of the other missing technical features will be. Tethering one's phone to give computers Internet access is a powerful but niche use for the phone. Android and Palm support it. The iPhone has supported tethering for some time, though AT&T only recently added the capability for U.S. iPhone owners. AT&T added tethering as a $20-a-month option in June, at the same time it eliminated the unlimited data option.

As for its other limitations, copy and paste are features I use all the time--and ones that Apple was lambasted for not having in its early days. Windows Phone allows for limited multitasking, such as playing music and running an application or using the calendar while talking on the phone, but the operating system doesn't allow the full multitasking found on other smartphones.

There is no doubt that those looking for the most technically powerful device will have reason to pause before picking Windows Phone 7. However--and this is the unusual thing for Microsoft--the early adopter isn't really the target customer for Windows Phone 7. The company appears to be aiming for the masses and may have hit its mark, even if it left some useful features on the chopping block.

Having used the phone as my everyday device for the last couple of months, I've found it elegant and reliable, if lacking in the aforementioned technical areas. I'm very curious to see what mainstream consumers make of the phones when they hit the market.

And we won't have to wait much longer. After years of development, Microsoft wrapped up development of Windows Phone 7 at the beginning of the month and plans to tout it at an October 11 event in New York. Devices are set to go on sale, at least in Europe, later that month, with U.S. availability likely by early November.

NOTE: fewer features, more customers, how does that work?

What cloud computing can and can't do

Kill enterprise architecture, provide infinite scalability, cost pennies per day -- these are just a few of our overblown expectations for the cloud

A recent post by Deloitte asked whether cloud computing makes enterprise architecture irrelevant: "With less reliance on massive, monolithic enterprise solutions, it's tempting to think that the hard work of creating a sustainable enterprise architecture (EA) is also behind us. So, as many companies make the move to cloud computing, they anticipate leaving behind a lot of the headaches of enterprise architecture."

In short: we make a lot of money from consulting on enterprise architecture, so please don't take my enterprise architecture away. It's analogous to saying that some revolutionary new building material makes structural engineering irrelevant. Even if that were the case, I still wouldn't go into that building.

I'm disturbed that the question is being asked at all. We should've evolved a bit by now, considering the amount of time cloud computing has been on the scene. However, silly questions such as this will continue to come up as we oversell the cloud; as a consequence of these inflated claims, I expect we'll be underdelivering pretty soon.

Cloud computing does not replace enterprise architecture. It does not provide "infinite scalability," it does not "cost pennies a day," you can't "get there in an hour" -- it won't iron my shirts either. It's exciting technology that holds the promise of providing more effective, efficient, and elastic computing platforms, but we're taking this hype to silly levels these days, and my core concern is that the cloud may not be able to meet these overblown expectations.

It's not politically correct to push back on cloud computing these days, so those who have concerns about the cloud are keeping their opinions to themselves. A bit of healthy skepticism is a good thing during technology transitions, considering that many hard questions are often not being asked.

by David Linthicum

NOTE: don't you think cloud computing is here to stay?

Yahoo keeps data center efficiency secrets to itself


Search giant boasts extraordinarily low 1.08 PUE for new Lockport facility -- but is mum on how it got there
By Ted Samson

The data center industry has embraced green goals in a big way over the past couple of years, and companies of all shapes and sizes are vying to crank out the most energy-efficient facilities with the lowest PUE (Power Usage Effectiveness) rating. Not only have plenty of companies trumpeted their success in attaining admirable PUE scores, many are also refreshingly -- and surprisingly -- open in sharing their secrets with the rest of the industry.

Yahoo is a notable exception to the trend -- at least in part. The company appears to have a handle on wringing high levels of per-watt performance from its data centers. Just this week, the company announced a new data center facility in Lockport, N.Y., with a jaw-dropping PUE of 1.08. In comparison, the industry average is 1.92, according to the EPA; the lowest reported PUE on record that I've seen, prior to Yahoo's announcement, is Google's 1.11, though Google's average across all its data centers is 1.23.
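
To make those ratings concrete: PUE is simply total facility power divided by the power that actually reaches the IT equipment, so a 1.08 means only 8 cents of overhead for every dollar of compute power. A quick sketch (the load figures below are invented for illustration, not Yahoo's actual measurements):

```python
# PUE (Power Usage Effectiveness) = total facility power / IT equipment power.
# A PUE of exactly 1.0 would mean every watt goes to the servers themselves.

def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Return the Power Usage Effectiveness ratio."""
    return total_facility_kw / it_equipment_kw

# Hypothetical 10 MW facility: at Yahoo's reported 1.08, only about
# 0.74 MW goes to cooling, power distribution, and lighting overhead.
total_mw = 10.0    # drawn by the whole facility (assumed)
it_mw = 9.26       # delivered to the servers (assumed)
print(round(pue(total_mw, it_mw), 2))  # → 1.08
```

At the EPA's reported 1.92 average, that same 10 MW facility would waste nearly as much power on overhead as it delivers to servers, which is why these fractions of a point matter.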

Where Yahoo parts ways with other green data center leaders is in its reticence to disclose its homegrown best practices. This would be easy to forgive were it not for the company's stated commitment to environmental leadership and its claim that it has "been sharing best practices to encourage the entire industry to put smarter policies in play."

In promoting its Lockport facility, here's what Yahoo has disclosed: The company has developed a modular data center called the Yahoo Compute Coop (YCC), named so because it resembles a chicken coop, a design that promotes better airflow. The facility uses no mechanical cooling whatsoever, which means no electricity goes toward spinning fans to push cold air up through the raised floor or down from the ceiling. Rather, the facility relies entirely on outside air, supplemented by evaporative cooling. Servers are lined up in hot aisle/cold aisle formation to prevent air mixing.

To Yahoo's credit, the company has done a fine job in building an environmentally friendly data center that uses far fewer natural resources than average. It even runs on hydropower delivered by a NYPA utility. The design earned Yahoo a hefty $9.9 million chunk of the U.S. Department of Energy's Green IT grant program.

But aside from the chicken coop design, Yahoo has not disclosed anything especially new or innovative here that sheds any light on how it has managed such a high level of energy efficiency. The data center industry is well aware of the green merits of modularity, free cooling, and hot aisle/cold aisle containment. Plenty of companies practice those techniques, yet none has achieved a PUE of 1.08 (or if they have, they aren't saying so).

NOTE: why is Yahoo so secretive? Do they think that is why they are competing well in the market?

Wednesday, September 29, 2010

Orange looks set to use Cisco, EMC, VMware in cloud, Sep 24, 2010 07:02 pm

Orange Business Services is planning an audio conference with the three-way data-center partnership
by Stephen Lawson

Orange Business Services appears poised to become a cloud service provider using pre-integrated "VBlocks" from Cisco Systems, EMC and VMware.

Orange will join those companies for an audio news conference on Monday morning, according to a media advisory released Friday. Cisco, EMC and VMware are partners in the Virtual Computing Environment coalition, which was formed in November 2009. The group was formed to combine networking, storage, computing and virtualization components in prepackaged "VBlocks" for constructing data centers. They also formed a joint venture, called Acadia, to help customers and system integrators build VBlocks.

Orange Business Services, a division of the multinational carrier Orange, already sells a variety of services to enterprises worldwide. Depending on the country, those include voice, video, unified communications, managed services, project management and security. Last month, Ovum analyst Peter Hall said Orange Business Services, AT&T and BT were in position to compete in cloud computing with the major cloud players from the IT industry.
He predicted that large global and regional carriers would become major providers of services such as infrastructure as a service (IaaS) and software as a service (SaaS). Hall will participate in the audio conference on Monday.

Other participants will include Orange Business Services CEO Vivek Badrinath, Cisco Executive Vice President Rob Lloyd, EMC cloud services chief Howard Elias and Carl Eschenbach, executive vice president of worldwide field operations at VMware, according to a registration page for the event.

Just last week, the VCE partners announced that Singapore carrier SingTel would use their products to offer hosted computing services to enterprise customers before the end of this year. The group said SingTel was its first Asian customer.

As virtualization brings computing, storage and networks together into one pool of IT resources, which can be delivered as a computing "cloud," enterprise IT vendors have been jockeying to create product lineups that span all those categories. Going up against the likes of Hewlett-Packard, IBM and Oracle, the Virtual Computing Environment group is attempting to tackle the market through partnership.

NOTE: what won't these big shots do to move the market?

Comcast hackers get 18 months in prison, Sep 24, 2010 07:58 pm

They redirected Comcast Web traffic in a May 2008 incident
by Robert McMillan

Two hackers convicted of defacing Comcast's website two years ago were sentenced Friday to 18 months in prison.

Christopher Lewis, 20, and Michael Nebel, 28, were part of a telephone hacking group called Kryogeniks that took control of the Comcast.net website in May 2008.

After taking over an account used to manage Comcast's Domain Name System information, they redirected visitors to their own website for several hours. Comcast.net drew about 5 million visitors per day at the time.

During the incident, visitors who went to the site were greeted with the message "KRYOGENICS Defiant and EBK RoXed Comcast. sHouTz to VIRUS Warlock elul21 coll1er seven."

The two men were sentenced Friday by Judge Robert Kelly in U.S. District Court for the Eastern District of Pennsylvania. Lewis, also known as EBK, and Nebel, a.k.a. Slacker, must also pay almost US$90,000 in restitution to Comcast.

A third hacker, James Black, who also goes by the name Defiant, was sentenced last month to four months in prison.

The attack was one of several in recent years that have shown how weak controls over corporate DNS accounts can lead to major problems. Earlier this year, Chinese search giant Baidu was taken offline in a similar hack.

In the case of Comcast, the hackers used social engineering techniques to trick an employee into giving them information that helped them access Comcast's DNS account with Network Solutions.

According to Black's plea agreement in March this year, the Kryogeniks crew gained administrative access to the Network Solutions account on May 27, 2008, and redirected Comcast traffic to their own website.
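
The mechanics of the attack are worth spelling out: whoever controls the registrar's DNS account controls where a hostname resolves, and every visitor's browser simply follows the record. A conceptual sketch (all addresses here are invented; 203.0.113.x is a reserved documentation range):

```python
# Conceptual sketch of a DNS hijack: resolution is just a lookup in
# whatever records the zone's owner -- or whoever holds the account --
# has published. No actual DNS traffic is involved here.

zone = {"comcast.net": "76.96.0.1"}  # legitimate record (invented IP)

def resolve(hostname: str) -> str:
    """Return the address the zone currently publishes for hostname."""
    return zone[hostname]

assert resolve("comcast.net") == "76.96.0.1"

# After social-engineering their way into the registrar account,
# the attackers need only one record edit to redirect every visitor:
zone["comcast.net"] = "203.0.113.7"  # attacker's server (documentation IP)
assert resolve("comcast.net") == "203.0.113.7"
```

This is why the account at Network Solutions, not any Comcast server, was the real target: no system on Comcast's network had to be compromised at all.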

It cost Comcast about $90,000 to recover from the attack.

Robert McMillan covers computer security and general technology breaking news. Follow Robert on Twitter at @bobmcmillan.

NOTE: are these hackers good or what?

Tuesday, September 28, 2010

Seabird concept phone designer talks about need for better interfaces, Sep 25, 2010 06:13 am

Designer Billy May, 25, is looking for work
by Matt Hamblen

Designer Billy May's concept smartphone, the Seabird, might never be produced. But the underlying concepts in the phone -- and inside May's head -- are a wonder, nonetheless.

"Anybody expecting full fruition of Seabird will be disappointed, I think," May said on Friday, a day after the concept phone appeared on Mozilla Labs' Concept Series Web site.

The concept phone was developed over the last year with heavy input from the Mozilla online community, with an emphasis on open source for software and hardware elements, May said. It would run Android, which is also open source.

Even so, Mozilla has no plans to produce it.

"I've talked to designers of concept phones who didn't get them built, but a few said they saw their work come out as a phone in China," May said. "So check in China in a couple of years."

The most striking feature of the smartphone is dual pico projectors, one on either side. When the phone is docked, one projector projects a full-size virtual keyboard and the other projects the phone's display on a nearby wall for easier viewing.

May said the idea for the projectors derived from his work in lighting and visualization design as well as the strong sentiment from Mozilla contributors, who believe the interface on phones needs to be made easier to use.

"What grabbed me more than anything was working with the limitations of the interface [in today's phones]," May said. "It was intriguing to see the projectors distort light outwards. I love seeing function play out."

"I like the medium of light, so you see my own affinities coming out in Seabird," he continued. "Projectors are so malleable, depending on the orientation and context and surroundings, so you can create all sorts of interfaces depending on where you are."

Designer Billy May's Seabird concept smartphone.

In general, May said today's phones have done "a wonderful job of delivering on metrics we'd define success by" such as 1GHz and soon-to-come 1.2 GHz processors and many-megapixel displays. "The next step is to expand on the interface," he said.

"We've been pushing on processor speeds until the cows come home, which won't drive happiness as much anymore and offer diminishing returns," he said.

May almost sounds like a veteran phone designer, but he has actually never designed one before. In fact, he is 25 and is "currently open to new opportunities," according to his Web site, which he explained means that he is looking for a full-time job in New York as a product designer.

NOTE: don't you just feel for the guy?

Panasonic unveils LUMIX GH2 w/3D interchangeable lens. Wed Sep 22


This is one amazing camera. The Panasonic LUMIX DMC-GH2 is the latest addition to the LUMIX G Micro system. The primary advantage of the Micro Four Thirds system is a compact camera body full of next-generation features.

Panasonic calls this its most advanced DSLM yet, and it can capture full HD 1920 x 1080 video in smooth 60i, doubling the sensor output from 25p/24p to 50p/60p. The camera also supports a native 1080/24p mode at 24 megabits per second, the highest bit rate in the AVCHD format, along with Touch Auto Focus, an HDMI output, a 16MP sensor, the Venus Engine FHD processor, a 3D LCD, and Live View at 60 frames per second.

Panasonic also announced three new lenses: the $400 Lumix G 14 mm / F2.5, which they’re touting as the world’s lightest interchangeable single focal length lens; the $600 telephoto Lumix G Vario 100-300mm / F4.0-5.6 / Mega O.I.S.; and, wait for it, the $250 LUMIX G 12.5mm / F12, the world’s first interchangeable 3D lens.

The GH2 relies on a refined image processing engine and sensor -- along with the ability to take 3D photographs with Panasonic's new interchangeable 3D lens -- in order to stand out from the legions of other mirrorless options that are suddenly breathing down the neck of MFT. Oh, and if you're not feeling that 1080/60i video mode, it'll also record 1080/24p at 24Mbps (the highest in AVCHD format), and the HDMI output ensures that no quality is lost when showing life's most wonderful memories on your pop's HDTV.

NOTE: you'll agree that Panasonic killed it on this one.

Sony's Google TV product slated for October 12 NYC unveiling, Sep 24, 2010

Sony has just dispatched an invitation for a New York City media event slated for October 12 that promises the introduction of "the world's first Internet Television." Sony's "Internet Television" is one of the first home video products that will include built-in support for Google TV, the new Web video service from the search giant that promises to integrate Google search and any Web-based Flash video directly into the TV.

The product was first announced at the Google I/O conference in May, and subsequently at the IFA show in Germany earlier this month. In addition to the Sony TV, there have been persistent rumors of a Google TV-enabled Sony Blu-ray player as well. The Sony products will compete with the Google TV-powered Logitech Revue set-top box, which is also scheduled to be released this fall.

For Sony and Google, it's certainly a case of the sooner, the better. These streaming TV products will be going head-to-head with the already refreshed Roku line, the soon-to-be-released Apple TV update, and the much-anticipated Boxee Box--not to mention Sony's own SMP-N100.


John P. Falcone covers home theater and network entertainment products.




Monday, September 27, 2010

Facebook outage caused by database glitch, Sep 24, 2010

Facebook's second outage of the week, caused by an error in database logic, underscores the need for effective testing and change control procedures
By Tony Bradley

Facebook went offline for the second time in two days yesterday. The Thursday outage -- which lasted more than two hours for some users -- is a tale of database error handling gone awry and illustrates the need for effective testing and change control procedures.

According to a blog post from Facebook describing the details of the issue, "The key flaw that caused this outage to be so severe was an unfortunate handling of an error condition. An automated system for verifying configuration values ended up causing much more damage than it fixed."

That is only half the story, though. The database glitch was triggered by a change to a configuration value. The database error handling is supposed to detect when a configuration value is invalid and replace it with a designated default value. However, the new default value implemented by Facebook was itself seen as invalid, causing an endless loop.

Facebook explains, "To make matters worse, every time a client got an error attempting to query one of the databases it interpreted it as an invalid value, and deleted the corresponding cache key. This meant that even after the original problem had been fixed, the stream of queries continued. As long as the databases failed to service some of the requests, they were causing even more requests to themselves. We had entered a feedback loop that didn't allow the databases to recover."
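
The failure mode Facebook describes can be sketched in a few lines: every client read misses the cache, hits the database, judges the value invalid, deletes the cache key, and the automated checker writes back a "repair" value that fails the same validation. This is a loose simulation of the blog post's description, not Facebook's code; all names and values are invented:

```python
# Simulate the config feedback loop: an automated checker replaces any
# invalid value with a designated default, but the default is itself
# invalid, so every read cycle deletes the cache and re-queries the DB.

def run_feedback_loop(rounds: int, designated_default: str = "v3") -> int:
    """Simulate client config reads; return how many hit the database."""
    valid = {"v1", "v2"}       # values clients accept (assumed)
    cache = {}
    db_value = "bad"           # the change that triggered the incident
    db_queries = 0
    for _ in range(rounds):
        value = cache.get("config")
        if value is None:
            db_queries += 1            # cache miss -> query the database
            value = db_value
            cache["config"] = value
        if value not in valid:
            del cache["config"]        # client deletes the cache key...
            db_value = designated_default  # ...checker "repairs" the DB
        else:
            break                      # a valid value ends the churn
    return db_queries

print(run_feedback_loop(5))        # → 5: every round hits the database
print(run_feedback_loop(5, "v1"))  # → 2: a valid default breaks the loop
```

The second call shows why Facebook's only way out was to break the cycle externally: inside the loop, nothing ever converges unless the repair value itself passes validation.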

Ultimately, Facebook was forced to shut the site down and take the affected database cluster offline to break the loop. It eventually allowed users back onto the site, but disabled the configuration error correction system that sparked the problem while it investigates new solutions to prevent this from occurring again in the future.

Like the Twitter cross-site scripting worm incident earlier this week, the Facebook outage holds some lessons for IT admins. The Twitter worm exploited a vulnerability that Twitter had already identified and patched, but inadvertently exposed again with a subsequent website update.

The Facebook outage was caused by implementing a configuration value on the live website without proper testing and validation. Had Facebook tested the new configuration value in a lab environment designed to mirror the real-world database cluster, it likely would have identified the problem with the new value, and the error loop it caused, before it took the entire Facebook site offline.

Your website may not, like Facebook, have half a billion users spending more time on it than on any other destination on the Web, but users, partners, and customers rely on it nonetheless. Make sure you follow secure coding practices, and follow solid patch management and change control procedures to detect and resolve issues like this proactively, before they take your site down.

Sunday, September 26, 2010

Facebook suffers second outage in as many days Sep 23, 2010 04:35 pm |

Users take to Twitter to vent frustrations
by Sharon Gaudin

Facebook was struggling today as its popular social networking site went offline for at least 45 minutes Thursday afternoon. It's the second day in a row the site has had problems.

As of 3:50 p.m. ET today, the site was back up for some users, but still down for others. A "DNS Failure" message popped up when users tried to access the site.

AlertSite, a Web performance management company, reported that Facebook went down around 2:30 p.m. ET. And between 2:30 p.m. and 3:30 p.m., the site only had 38.46% availability. AlertSite also reported that Facebook was down at all 12 of its monitoring locations throughout the U.S.
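
Availability figures like AlertSite's 38.46% are just the share of monitoring probes that succeeded during the window. A minimal sketch (the probe sample below is invented to reproduce the reported figure, not AlertSite's actual data):

```python
# An availability percentage is the fraction of successful checks in a
# monitoring window. 5 successes out of 13 probes gives AlertSite's
# reported 38.46% -- the sample is assumed, chosen to match that number.

def availability(checks: list[bool]) -> float:
    """Percent of monitoring checks that succeeded."""
    return 100.0 * sum(checks) / len(checks)

probes = [True] * 5 + [False] * 8   # one hour of probes (invented)
print(round(availability(probes), 2))  # → 38.46
```

Real services refine this by probing from many locations, as AlertSite's 12 U.S. monitoring points did here, so a single flaky network path doesn't register as a full outage.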

With half a billion users worldwide, many of them admittedly "addicted" to Facebook, any downtime immediately stirs up a lot of buzz online.

As usual, a lot of users took to Twitter to voice their frustrations. "Facebook is down. In other news, office productivity is up across America," tweeted "MattMooreSC."

The outage occurred one day after news leaked that Facebook CEO Mark Zuckerberg is donating $100 million to the struggling Newark, N.J. school system. That donation comes just ahead of the Oct. 1 release of The Social Network movie, which chronicles the creation of Facebook and reportedly doesn't always paint Zuckerberg in the best light.

"It seems Facebook is down and Zuckerberg won't bring it back up unless we promise not to see the movie," was how one Twitter user, "Someecards," put it in a tweet.

It's unclear whether the problem today was related to Facebook's problems on Wednesday, when a third-party networking provider took the site down for some users.

Sharon Gaudin covers the Internet and Web 2.0, emerging technologies, and desktop and laptop chips. Follow Sharon on Twitter at @sgaudin.