The group, called the Open Networking Foundation, hopes to standardize a set of technologies pioneered at Stanford and the University of California, Berkeley, intended to make networks large and small programmable in much the same way that individual computers are.
The changes, if widely adopted, would have implications for telecommunications networks and large corporate data centers, but also for small home networks. The advantages, proponents say, would be networks that are more flexible, more secure and less likely to suffer congestion. Someday, they say, networks might even be cheaper to build and operate.
The new approach might allow on-demand "express lanes" to be set up for voice and data traffic that is time-sensitive. Or it could let big telecommunications companies, like Verizon or AT&T, use software to combine several fiber-optic backbones temporarily for particularly heavy loads of information and then have them separate automatically when a data rush hour is over.
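OpenFlow expresses such policies as match-action rules in a switch's flow table. Below is a minimal, hypothetical Python sketch of that model; the rule format, the FlowTable class and the choice of SIP's port 5060 as the "time-sensitive" match are illustrative assumptions for this example, not any real controller's API.

    # Hypothetical sketch of OpenFlow's match-action model: install a
    # rule that sends time-sensitive voice traffic to a high-priority
    # queue, while everything else takes the default path. The objects
    # here are stand-ins, not a real device or controller interface.

    VOIP_PORT = 5060  # SIP signaling; purely an illustrative match

    express_lane_rule = {
        "match": {"ip_proto": "udp", "udp_dst": VOIP_PORT},
        "actions": [{"set_queue": 1},      # high-priority egress queue
                    {"output": "NORMAL"}], # then forward as usual
        "priority": 100,                   # checked before lower rules
    }

    default_rule = {
        "match": {},                       # wildcard: matches all traffic
        "actions": [{"output": "NORMAL"}],
        "priority": 0,
    }

    class FlowTable:
        """Stand-in for a switch's flow table, ordered by priority."""
        def __init__(self):
            self.rules = []

        def install(self, rule):
            self.rules.append(rule)
            self.rules.sort(key=lambda r: -r["priority"])

    table = FlowTable()
    table.install(default_rule)
    table.install(express_lane_rule)
    for rule in table.rules:
        print(rule["priority"], rule["match"], rule["actions"])

In a real network such rules would be sent to hardware over the OpenFlow protocol; the point of the sketch is only that priority becomes data a controller can rewrite on demand, creating or tearing down an "express lane" without touching the hardware.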
For households, the new capabilities could let Internet service providers offer remote home-security or energy-control services.
The foundation's organizers also said the new technology offers ways to improve computer security and could potentially improve privacy in e-commerce and social networking, the fastest-growing uses of network and computing resources.
While the new capabilities could prove significant for network engineers, for consumers and business users the changes may be no more noticeable than improvements in plumbing, heating and air-conditioning. Everything might work better, but most users would probably not stop to notice, or care, why or how.
Members of the Open Networking Foundation include Broadcom, Brocade, Ciena, Cisco, Citrix, Dell, Deutsche Telekom, Ericsson, Facebook, Force10, Google, Hewlett-Packard, I.B.M., Juniper, Marvell, Microsoft, NEC, Netgear, NTT, Riverbed Technology, Verizon, VMware and Yahoo.
"This answers a question that the entire industry has had, and this is how you provide owners and operators of large networks with the flexibility of the control they want to build in a standardised manner," said Nick McKeown, Professor of electrical and computer engineering at Stanfordwhere his and colleagues at work is part of the technical foundations, called OpenFlow.
The effort is a departure from the traditional way the Internet works. Designed by military and academic experts in the 1960s, the Internet was based on interconnected computers that send and receive packets of data, paying little attention to their content and making few distinctions among the different kinds of senders and receivers of information.
Intelligence in the original Internet was meant to reside largely at the endpoints of the network, the computers themselves, while the specialized routing computers in between were relatively dumb post offices of varying sizes, largely confined to reading addresses and passing packets of data along to adjacent systems.
But these days, when cloud computing means that much information is stored and processed on computers inside the network, there is a growing need for smarter control systems to orchestrate the behavior of thousands of routing machines. Such systems would allow the managers of large networks, for example, to program their networks to give priority to certain types of data, perhaps to ensure quality of service or to add security to parts of a network.
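As a rough illustration of that centralized model, here is a hypothetical Python sketch in which a single controller pushes one policy to an entire fleet of switches; the Switch and Controller classes and their methods are stand-ins invented for this example, not a real device or controller API.

    # Hypothetical sketch of a centralized control plane: one controller
    # holds the policy and pushes it to every switch it manages, rather
    # than each routing machine deciding its behavior on its own.

    class Switch:
        def __init__(self, name):
            self.name = name
            self.rules = []

        def install(self, rule):
            # In a real network this would be an OpenFlow message
            # sent to the hardware over the control channel.
            self.rules.append(rule)

    class Controller:
        """Central point of control for a fleet of switches."""
        def __init__(self, switches):
            self.switches = switches

        def apply_policy(self, rules):
            # Push the same policy everywhere in one sweep, instead
            # of configuring thousands of boxes one at a time.
            for switch in self.switches:
                for rule in rules:
                    switch.install(rule)

    fleet = [Switch(f"sw{i}") for i in range(3)]  # stands in for thousands
    policy = [
        {"match": {"dscp": "voice"}, "action": "queue:1", "priority": 100},
        {"match": {},                "action": "forward", "priority": 0},
    ]
    Controller(fleet).apply_policy(policy)
    print(fleet[0].rules)

The design point is that policy lives in one program rather than in thousands of individually configured machines.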
Its designers argue that because OpenFlow will open up the hardware and software systems that control the flow of Internet data packets, systems that have until now been closed and proprietary, it will set off a new cycle of innovation focused principally on the vast computer systems known as clouds.
"It is a pragmatic solution," said David Farber, a computer scientist at Carnegie Mellon, who was one of the pioneers of the technology of data networking.
"The idea of mobile intelligence for network termination points was one of the original design of the Internet," said Mr. Farber. But he noted that as the advanced network to sophisticated advanced offer services through computers in centralized cloud, including the delivery of digital and video telephony, it became less possible to continue in relying on the design of decentralized network.
Mr. Farber noted that there are other research projects aimed at redesigning the Internet. The National Science Foundation, which supports the OpenFlow initiative, has also funded the Global Environment for Network Innovations, or GENI. OpenFlow appears to have generated broad industry support, he said, but it has yet to prove itself in the marketplace.
A number of networking companies, including Cisco, Hewlett-Packard and Juniper, have already built prototype systems that support the OpenFlow technology. In addition, at least one Silicon Valley start-up, Nicira Networks, is now testing OpenFlow-based products that are expected to enter the cloud computing market later this year.
"If you look really large companies such as Google and Amazon, they take really smart programmers and give them a problem such as research or the automation of storage and then turn the crank and occasional pops a large system", said Martin Casado, from Nicira Chief Technologist and a member of the research project at Stanford OpenFlow.
But Mr. Casado noted that, in the past, the one part of the system that could not be programmed was the network. "This customizes the network for the applications that are actually being deployed."