-
ENTAGMA.COM
Free Course: Introduction into APEX – Ep.01
By Anthea Eichstetter, 14.04.2025

We are very excited to have Magnus Møller, co-founder of Tumblehead Animation, as your teacher for this APEX beginner course. In this three-part series, Magnus will introduce you to the world of character animation with Houdini! You will learn how and why to use APEX and how it differs from SOPs, how to rig a character, and finally you will also get an introduction to APEX script.

We hope this series is helpful to you. If you have any requests or feedback, please leave them in the comments below. We would love to have Magnus back for future APEX and animation courses. Your feedback and support are the best way we can achieve this!
-
3DPRINTINGINDUSTRY.COM
ALTANA Brings Cubic Ink Resin Production Closer to U.S. Customers

German chemical group ALTANA has ramped up production of its Cubic Ink UV-curing resins for industrial additive manufacturing, marking a major step toward localized manufacturing and distribution in the United States. By producing domestically, ALTANA also aims to enhance supply chain reliability and reduce delivery lead times for its U.S.-based customers.

The first large-scale batch of a UV-curable 3D printing resin from Cubic Ink was produced in collaboration with ALTANA's ACTEGA division at its Cinnaminson facility and is now headed to a medical technology customer on the U.S. West Coast.

"Our customer proximity was crucial to the successful implementation of the project. We are on site and understand the challenges of our customers. This enables us to grow together and quickly develop individual product solutions. This is especially true for innovative technologies such as 3D printing," said Dr. Max Röttger, Head of Cubic Ink.

The move reinforces ALTANA's commitment to scaling industrial-grade additive manufacturing, backed by robust production capacity, advanced technologies, and rigorous quality assurance.

ALTANA Cubic Ink scaling up production. Photo via ALTANA.

High-Performance Materials for Open 3D Printing Platforms

The Cubic Ink resin portfolio is engineered for compatibility with a wide range of open 3D printing systems, including DLP, LCD, and SLA technologies. Optimized for end-use applications, these resins offer properties such as chemical resistance, durability, and aging stability. Their low viscosity supports real-time, cost-efficient processing, while customizable formulations can be fine-tuned for specific machines and operational requirements. Cubic Ink also offers specialized inks for material jetting.

This broader materials strategy supports a wider array of applications across industries with stringent performance demands, including automotive, aerospace, and healthcare fields such as audiology, dentistry, and orthopedics.

ALTANA Cubic Ink – materials for additive manufacturing. Photo via ALTANA.

Track Record of Innovation and Industry Collaboration

ALTANA's current scale-up effort builds on a series of strategic partnerships and product expansions. In 2024, ALTANA's Cubic Ink division teamed up with 3D printing firm Quantica to develop advanced materials for 2D and 3D inkjet printing. The collaboration introduced starter resins for Quantica's NovoJet OPEN system and focused on high-viscosity formulations to extend application possibilities.

In 2023, ALTANA expanded its Cubic Ink portfolio to include new materials for DLP, LCD, SLA, and jetting systems, targeting end-use components in high-demand sectors. Highlights from the 2023 portfolio included Cubic Ink High Performance 2-1400 VP for SLA, along with other specialized materials like Mold 210 VP, 601 VP, and ESD-safe High Performance 4-2800 VP-ESD. These products were showcased at Formnext 2023 in Frankfurt, signaling ALTANA's commitment to open-system, industrial-scale AM solutions.
-
WWW.COMPUTERWEEKLY.COM
Government injects extra funding to drive quantum growth
The UK government has ploughed an extra £121m into quantum to drive development of the technology.
By Cliff Saran, Managing Editor. Published: 14 Apr 2025 12:00
Image: Bartek Wróblewski - stock.adobe

The government has committed £121m of funding over the next 12 months to support quantum computing in the UK. While quantum computing remains a nascent technology, it promises to revolutionise research and development and to power computational tasks that cannot be performed on today's most advanced supercomputers, paving the way to significant economic benefits for countries that can harness the technology effectively.

The UK's National Quantum Technologies Programme sets out the government's long-term effort to back early-stage research and to support getting quantum technologies out of the lab and into the marketplace.

According to data from professional services firm Qureca, China has made the largest investment in quantum technology, which it estimates is worth $15bn, followed by the US ($7.7bn). The UK's investment in quantum technology ($4.3bn) is ahead of both Germany ($3.3bn) and France ($2.2bn), according to Qureca's data, which demonstrates the government's continued funding and its big bet on this emerging technology sector.

Coinciding with World Quantum Day, the Department for Science, Innovation and Technology said the funding is being made available over the next year to expand the use of the technology and secure the UK's position as a world leader in quantum, as part of the government's long-term commitment to the sector.

Secretary of State for science and technology Peter Kyle said the UK is home to the second-largest community of quantum businesses in the world. The funding is set to help support the development of new quantum tools and products, and aligns with the government's Plan for Change.

"Quantum has the potential to save millions for our economy, create thousands of jobs and improve businesses across the country – stopping fraudsters in their tracks, protecting our bank accounts and more," he said. "Backing our world-class quantum researchers and businesses is an important part of our Plan for Change."

Read more about quantum computing:
UK government invests £106m in five quantum tech hubs: Five university hubs are receiving funding to support the development of quantum applications that can support healthcare and businesses.
Research team demonstrates certified quantum randomness: A 56-qubit trapped-ion quantum computer from Quantinuum has demonstrated quantum supremacy as a random number generator.

The government hopes the funding will help to further the development and deployment of quantum systems, which could be used to power improved healthcare systems, boost energy efficiency in the grid, and help tackle fraud and money laundering. For instance, quantum experts at HSBC have been working with the National Quantum Computing Centre to research how quantum computing could be used to identify indicators of money laundering.

The funding is being split across a number of areas:
£46.1m through Innovate UK will be used to accelerate the deployment of quantum technology across a range of sectors, including computing; networking; position, navigation and timing; and sensing.
The National Quantum Computing Centre will receive £21.1m to further its quantum testbed programme with Innovate UK and support the Quantum Software Lab.
The National Physical Laboratory will receive £10.9m for its quantum measurement programme.

In July 2024, the government announced five new quantum hubs in Glasgow, Edinburgh, Birmingham, Oxford and London to bring together researchers and businesses. As part of this research programme, the Engineering and Physical Sciences Research Council will receive £24.6m in funding, which includes an investment of £3m in training and skills programmes. Other programmes receiving funding include 11 Quantum Technology Career Acceleration Fellowships, which are being awarded £15.1m, and £4.3m from the Science and Technology Facilities Council to back early-career researchers and quantum-enabled apprenticeships.
-
WWW.ZDNET.COM
Samsung's new rugged phone and tablet are built to last - but still have AI smarts
Balancing brains and brawn, the XCover7 Pro is Samsung's latest rugged phone, now housing a brighter display and Google's AI software.
-
WWW.FORBES.COM
Why They Changed Abby In ‘The Last Of Us’ Season 2
One thing that has come up already is how The Last of Us has handled Abby in two different ways: one appearance-wise, the other story-wise.
-
WWW.TECHSPOT.COM
Trump plans new tariffs on semiconductors, promises flexibility for some companies

What just happened? On Sunday, President Donald Trump revealed to reporters aboard Air Force One that he plans to announce a tariff rate on imported semiconductors within the coming week. Significantly, though, Trump also signaled potential flexibility for certain companies in the sector.

According to Reuters, Trump told reporters during the flight that he wanted to uncomplicate the semiconductor industry because the US wants to make its chips and other products in the country. While he declined to specify whether products like smartphones might remain exempt from tariffs, he emphasized the need for adaptability. "You have to show a certain flexibility," Trump said. "Nobody should be so rigid."

The president's comments come as his administration intensifies its focus on the semiconductor industry. Earlier in the day, Trump announced a national security trade investigation into semiconductors and the broader electronics supply chain. "We are taking a look at Semiconductors and the WHOLE ELECTRONICS SUPPLY CHAIN in the upcoming National Security Tariff Investigations," Trump wrote on social media.

The announcement follows Friday's decision by the White House to exclude certain technology products from steep reciprocal tariffs on Chinese imports, a move that briefly raised hopes within the tech industry that consumer goods like phones and laptops might avoid price hikes. However, comments from Commerce Secretary Howard Lutnick on Sunday clarified that critical electronics, including smartphones and computers, would soon face separate tariffs, in addition to those on semiconductors.

Lutnick outlined the administration's plans for what he described as "a special focus-type of tariff" targeting electronics and pharmaceuticals, expected to take effect within one to two months. These new duties would be distinct from Trump's reciprocal tariffs, which last week raised levies on Chinese imports to 145 percent. "He's saying they're exempt from the reciprocal tariffs, but they're included in the semiconductor tariffs, which are coming in probably a month or two," Lutnick explained during a television interview. He predicted that these measures would incentivize companies to relocate production to the United States.

The escalating trade tensions have drawn a sharp response from Beijing. China retaliated by increasing its tariffs on US imports to 125 percent. In response to Washington's latest moves, China's Ministry of Commerce issued a statement on Sunday indicating it was assessing the impact of the exclusions for technology products announced late last week. "The bell on a tiger's neck can only be untied by the person who tied it," the ministry said, using a proverb that suggests resolution lies with those who initiated the conflict.
-
WWW.DIGITALTRENDS.COM
NYT Mini Crossword today: puzzle answers for Monday, April 14

Love crossword puzzles but don't have all day to sit and solve a full-sized puzzle in your daily newspaper? That's what The Mini is for! A bite-sized version of the New York Times' well-known crossword puzzle, The Mini is a quick and easy way to test your crossword skills daily in a lot less time (the average puzzle takes most players just over a minute to solve).

While The Mini is smaller and simpler than a normal crossword, it isn't always easy. Tripping up on one clue can be the difference between a personal best completion time and an embarrassing solve attempt. Just like our Wordle hints and Connections hints, we're here to help with The Mini today if you're stuck and need a little help.

Below are the answers for the NYT Mini crossword today.

Across
Uneaten part of toast, often – CRUST
Like stud muffins – HUNKY
Prepare for use, as a marker – UNCAP
Nick of "48 Hrs." – NOLTE
Strike zone's lower boundary – KNEES

Down
Alternative to a chip, in the baking aisle – CHUNK
Kind of sentence that keeps going and going, it should have been made into two sentences – RUNON
Cousin's dad – UNCLE
Spin out on the ice, say – SKATE
Uses a keyboard – TYPES
-
ARSTECHNICA.COM
Intergalactic Computer Network
An Ars Technica history of the Internet, part 1
In our new 3-part series, we remember the people and ideas that made the Internet.
Jeremy Reimer – Apr 14, 2025 7:00 am
Credit: Collage by Aurich Lawson

In a very real sense, the Internet, this marvelous worldwide digital communications network that you're using right now, was created because one man was annoyed at having too many computer terminals in his office.

The year was 1966. Robert Taylor was the director of the Advanced Research Projects Agency's Information Processing Techniques Office. The agency was created in 1958 by President Eisenhower in response to the launch of Sputnik. So Taylor was in the Pentagon, a great place for acronyms like ARPA and IPTO. He had three massive terminals crammed into a room next to his office. Each one was connected to a different mainframe computer. They all worked slightly differently, and it was frustrating to remember multiple procedures to log in and retrieve information.

Author's re-creation of Bob Taylor's office with three teletypes. Credit: Rama & Musée Bolo (Wikipedia/Creative Commons), steve lodefink (Wikipedia/Creative Commons), The Computer Museum @ System Source

In those days, computers took up entire rooms, and users accessed them through teletype terminals—electric typewriters hooked up to either a serial cable or a modem and a phone line. ARPA was funding multiple research projects across the United States, but users of these different systems had no way to share their resources with each other. Wouldn't it be great if there was a network that connected all these computers?

The dream is given form

Taylor's predecessor, Joseph "J.C.R." Licklider, had released a memo in 1963 that whimsically described an "Intergalactic Computer Network" that would allow users of different computers to collaborate and share information. The idea was mostly aspirational, and Licklider wasn't able to turn it into a real project. But Taylor knew that he could. In a 1998 interview, Taylor explained: "In most government funding, there are committees that decide who gets what and who does what. In ARPA, that was not the way it worked. The person who was responsible for the office that was concerned with that particular technology—in my case, computer technology—was the person who made the decision about what to fund and what to do and what not to do. The decision to start the ARPANET was mine, with very little or no red tape."

Taylor marched into the office of his boss, Charles Herzfeld. He described how a network could save ARPA time and money by allowing different institutions to share resources. He suggested starting with a small network of four computers as a proof of concept.

"Is it going to be hard to do?" Herzfeld asked.

"Oh no. We already know how to do it," Taylor replied.

"Great idea," Herzfeld said. "Get it going. You've got a million dollars more in your budget right now. Go."

Taylor wasn't lying—at least, not completely. At the time, there were multiple people around the world thinking about computer networking. Paul Baran, working for RAND, published a paper in 1964 describing how a distributed military networking system could be made resilient even if some nodes were destroyed in a nuclear attack.
Over in the UK, Donald Davies independently came up with a similar concept (minus the nukes) and invented a term for the way these types of networks would communicate. He called it "packet switching."

On a regular phone network, after some circuit switching, a caller and answerer would be connected via a dedicated wire. They had exclusive use of that wire until the call was completed. Computers communicated in short bursts and didn't require pauses the way humans did. So it would be a waste for two computers to tie up a whole line for extended periods. But how could many computers talk at the same time without their messages getting mixed up?

Packet switching was the answer. Messages were divided into multiple snippets. The order and destination were included with each message packet. The network could then route the packets in any way that made sense. At the destination, all the appropriate packets were put into the correct order and reassembled. It was like moving a house across the country: It was more efficient to send all the parts in separate trucks, each taking their own route to avoid congestion. (A short illustrative code sketch appears a few paragraphs below.)

A simplified diagram of how packet switching works. Credit: Jeremy Reimer

By the end of 1966, Taylor had hired a program director, Larry Roberts. Roberts sketched a diagram of a possible network on a napkin and met with his team to propose a design. One problem was that each computer on the network would need to use a big chunk of its resources to manage the packets. In a meeting, Wes Clark passed a note to Roberts saying, "You have the network inside-out." Clark's alternative plan was to ship a bunch of smaller computers to connect to each host. These dedicated machines would do all the hard work of creating, moving, and reassembling packets.

With the design complete, Roberts sent out a request for proposals for constructing the ARPANET. All they had to do now was pick the winning bid, and the project could begin.

BB&N and the IMPs

IBM, Control Data Corporation, and AT&T were among the first to respond to the request. They all turned it down. Their reasons were the same: None of these giant companies believed the network could be built. IBM and CDC thought the dedicated computers would be too expensive, but AT&T flat-out said that packet switching wouldn't work on its phone network.

In late 1968, ARPA announced a winner for the bid: Bolt Beranek and Newman. It seemed like an odd choice. BB&N had started as a consulting firm that calculated acoustics for theaters. But the need for calculations led to the creation of a computing division, and its first manager had been none other than J.C.R. Licklider. In fact, some BB&N employees had been working on a plan to build a network even before the ARPA bid was sent out.

Robert Kahn led the team that drafted BB&N's proposal. Their plan was to create a network of "Interface Message Processors," or IMPs, out of Honeywell 516 computers. They were ruggedized versions of the DDP-516 16-bit minicomputer. Each had 24 kilobytes of core memory and no mass storage other than a paper tape reader, and each cost $80,000 (about $700,000 today). In comparison, an IBM 360 mainframe cost between $7 million and $12 million at the time.

An original IMP, the world's first router. It was the size of a large refrigerator. Credit: Steve Jurvetson (CC BY 2.0)

The 516's rugged appearance appealed to BB&N, who didn't want a bunch of university students tampering with its IMPs. The computer came with no operating system, but it didn't really have enough RAM for one.
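To make the packet-switching idea described above concrete, here is a minimal Python sketch. It is purely illustrative: the packet size, the field names, and the shuffled delivery order are assumptions made up for the demo, not details of the ARPANET or IMP design.

```python
# Minimal sketch of packet switching: split a message into numbered packets,
# let the "network" deliver them in any order, then reassemble at the destination.
# Packet size and field names are illustrative assumptions, not ARPANET specifics.
import random

PACKET_SIZE = 8  # bytes of payload per packet (arbitrary for the demo)

def packetize(message: str, destination: str) -> list[dict]:
    """Break a message into packets, each carrying its order and destination."""
    data = message.encode("utf-8")
    chunks = [data[i:i + PACKET_SIZE] for i in range(0, len(data), PACKET_SIZE)]
    return [
        {"destination": destination, "sequence": seq, "total": len(chunks), "payload": chunk}
        for seq, chunk in enumerate(chunks)
    ]

def reassemble(packets: list[dict]) -> str:
    """Put packets back into order and rebuild the original message."""
    ordered = sorted(packets, key=lambda p: p["sequence"])
    return b"".join(p["payload"] for p in ordered).decode("utf-8")

if __name__ == "__main__":
    packets = packetize("LOGIN attempt from UCLA to SRI", destination="SRI")
    random.shuffle(packets)     # packets may take different routes and arrive out of order
    print(reassemble(packets))  # prints: LOGIN attempt from UCLA to SRI
```

Real packet networks also handle routing, error detection, and retransmission, but splitting, independent delivery, and reassembly by sequence number are the core of the idea.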
The software to control the IMPs was written on bare metal using the 516's assembly language. One of the developers was Will Crowther, who went on to create the first computer adventure game.

One other hurdle remained before the IMPs could be put to use: The Honeywell design was missing certain components needed to handle input and output. BB&N employees were dismayed that the first 516, which they named IMP-0, didn't have working versions of the hardware additions they had requested. It fell on Ben Barker, a brilliant undergrad student interning at BB&N, to manually fix the machine. Barker was the best choice, even though he had slight palsy in his hands. After several stressful 16-hour days wrapping and unwrapping wires, all the changes were complete and working. IMP-0 was ready.

In the meantime, Steve Crocker at the University of California, Los Angeles, was working on a set of software specifications for the host computers. It wouldn't matter if the IMPs were perfect at sending and receiving messages if the computers themselves didn't know what to do with them. Because the host computers were part of important academic research, Crocker didn't want to seem like he was a dictator telling people what to do with their machines. So he titled his draft a "Request for Comments," or RFC. This one act of politeness forever changed the nature of computing. Every change since has been done as an RFC, and the culture of asking for comments pervades the tech industry even today.

RFC No. 1 proposed two types of host software. The first was the simplest possible interface, in which a computer pretended to be a dumb terminal. This was dubbed a "terminal emulator," and if you've ever done any administration on a server, you've probably used one. The second was a more complex protocol that could be used to transfer large files. This became FTP, which is still used today.

A single IMP connected to one computer wasn't much of a network. So it was very exciting in September 1969 when IMP-1 was delivered to BB&N and then shipped via air freight to UCLA. The first test of the ARPANET was done with simultaneous phone support. The plan was to type "LOGIN" to start a login sequence. This was the exchange:

"Did you get the L?"
"I got the L!"
"Did you get the O?"
"I got the O!"
"Did you get the G?"
"Oh no, the computer crashed!"

It was an inauspicious beginning. The computer on the other end was helpfully filling in the "GIN" part of "LOGIN," but the terminal emulator wasn't expecting three characters at once and locked up. It was the first time that autocomplete had ruined someone's day. The bug was fixed, and the test completed successfully.

IMP-2, IMP-3, and IMP-4 were delivered to the Stanford Research Institute (where Doug Engelbart was keen to expand his vision of connecting people), UC Santa Barbara, and the University of Utah. Now that the four-node test network was complete, the team at BB&N could work with the researchers at each node to put the ARPANET through its paces. They deliberately created the first ever denial of service attack in January 1970, flooding the network with packets until it screeched to a halt.

The original ARPANET, predecessor of the Internet. Circles are IMPs, and rectangles are computers. Credit: DARPA

Surprisingly, many of the administrators of the early ARPANET nodes weren't keen to join the network. They didn't like the idea of anyone else being able to use resources on "their" computers.
Taylor reminded them that their hardware and software projects were mostly ARPA-funded, so they couldn't opt out. The next month, Stephen Carr, Stephen Crocker, and Vint Cerf released RFC No. 33. It described a Network Control Protocol (NCP) that standardized how the hosts would communicate with each other. After this was adopted, the network was off and running.

J.C.R. Licklider, Bob Taylor, Larry Roberts, Steve Crocker, and Vint Cerf. Credit: US National Library of Medicine, WIRED, Computer Timeline, Steve Crocker, Vint Cerf

The ARPANET grew significantly over the next few years. Important events included the first ever email between two different computers, sent by Ray Tomlinson in July 1972. Another groundbreaking demonstration involved a PDP-10 at Harvard simulating, in real time, an aircraft landing on a carrier. The data was sent over the ARPANET to an MIT-based graphics terminal, and the wireframe graphical view was shipped back to a PDP-1 at Harvard and displayed on a screen. Although it was primitive and slow, it was technically the first gaming stream.

A big moment came in October 1972 at the International Conference on Computer Communication. This was the first time the network had been demonstrated to the public. Interest in the ARPANET was growing, and people were excited. A group of AT&T executives noticed a brief crash and laughed, confident that they were correct in thinking that packet switching would never work. Overall, however, the demonstration was a resounding success. But the ARPANET was no longer the only network out there.

The two keystrokes on a Model 33 Teletype that changed history. Credit: Marcin Wichary (CC BY 2.0)

A network of networks

The rest of the world had not been standing still. In Hawaii, Norman Abramson and Franklin Kuo created ALOHAnet, which connected computers on the islands using radio. It was the first public demonstration of a wireless packet switching network. In the UK, Donald Davies' team developed the National Physical Laboratory (NPL) network. It seemed like a good idea to start connecting these networks together, but they all used different protocols, packet formats, and transmission rates. In 1972, the heads of several national networking projects created an International Networking Working Group. Cerf was chosen to lead it.

The first attempt to bridge this gap was SATNET, also known as the Atlantic Packet Satellite Network. Using satellite links, it connected the US-based ARPANET with networks in the UK. Unfortunately, SATNET itself used its own set of protocols. In true tech fashion, an attempt to make a universal standard had created one more standard instead.

Robert Kahn asked Vint Cerf to try to fix these problems once and for all. They came up with a new plan called the Transmission Control Protocol, or TCP. The idea was to connect different networks through specialized computers, called "gateways," that translated and forwarded packets. TCP was like an envelope for packets, making sure they got to the right destination on the correct network. Because some networks were not guaranteed to be reliable, when one computer successfully received a complete and undamaged message, it would send an acknowledgement (ACK) back to the sender. If the ACK wasn't received in a certain amount of time, the message was retransmitted.

In December 1974, Cerf, Yogen Dalal, and Carl Sunshine wrote a complete specification for TCP. Two years later, Cerf and Kahn, along with a dozen others, demonstrated the first three-network system.
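The acknowledge-and-retransmit loop just described is the heart of how TCP copes with unreliable networks. Below is a toy Python sketch of the idea; the drop probability, the retry limit, and the function names are assumptions made up for illustration, not anything from the actual TCP specification.

```python
# Toy sketch of acknowledge-and-retransmit: send a message over a lossy channel,
# wait for an ACK, and resend if the ACK does not arrive in time.
# Drop rate and retry count are arbitrary values chosen for the demo.
import random

DROP_PROBABILITY = 0.5   # chance the channel silently loses a transmission
MAX_RETRIES = 10

def unreliable_send(message: str) -> bool:
    """Pretend network: returns True if the message (and its ACK) got through."""
    return random.random() > DROP_PROBABILITY

def send_with_retransmit(message: str) -> bool:
    """Keep retransmitting until an ACK arrives or we give up."""
    for attempt in range(1, MAX_RETRIES + 1):
        if unreliable_send(message):
            print(f"attempt {attempt}: ACK received")
            return True
        print(f"attempt {attempt}: no ACK before timeout, retransmitting")
    return False

if __name__ == "__main__":
    send_with_retransmit("packet 42: HELLO")
```

Real TCP layers sequence numbers, sliding windows, and adaptive timeouts on top of this loop, but send, wait for an ACK, and resend on silence is the essential mechanism.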
That demo connected packet radio, the ARPANET, and SATNET, all using TCP. Afterward, Cerf, Jon Postel, and Danny Cohen suggested a small but important change: They should take out all the routing information and put it into a new protocol, called the Internet Protocol (IP). All the remaining stuff, like breaking and reassembling messages, detecting errors, and retransmission, would stay in TCP. Thus, in 1978, the protocol officially became known as, and was forever thereafter, TCP/IP.

A map of the Internet in 1977. White dots are IMPs, and rectangles are host computers. Jagged lines connect to other networks. Credit: The Computer History Museum

If the story of creating the Internet were a movie, the release of TCP/IP would have been the triumphant conclusion. But things weren't so simple. The world was changing, and the path ahead was murky at best.

At the time, joining the ARPANET required leasing high-speed phone lines for $100,000 per year. This limited it to large universities, research companies, and defense contractors. The situation led the National Science Foundation (NSF) to propose a new network that would be cheaper to operate. Other educational networks arose at around the same time. While it made sense to connect these networks to the growing Internet, there was no guarantee that this would continue. And there were other, larger forces at work.

By the end of the 1970s, computers had improved significantly. The invention of the microprocessor set the stage for smaller, cheaper computers that were just beginning to enter people's homes. Bulky teletypes were being replaced with sleek, TV-like terminals. The first commercial online service, CompuServe, was released to the public in 1979. For just $5 per hour, you could connect to a private network, get weather and financial reports, and trade gossip with other users. At first, these systems were completely separate from the Internet. But they grew quickly. By 1987, CompuServe had 380,000 subscribers.

A magazine ad for CompuServe from 1980. Credit: marbleriver

Meanwhile, the adoption of TCP/IP was not guaranteed. At the beginning of the 1980s, the Open Systems Interconnection (OSI) group at the International Organization for Standardization (ISO) decided that what the world needed was more acronyms—and also a new, global, standardized networking model. The OSI model was first drafted in 1980, but it wasn't published until 1984. Nevertheless, many European governments, and even the US Department of Defense, planned to transition from TCP/IP to OSI. It seemed like this new standard was inevitable.

The seven-layer OSI model. If you ever thought there were too many layers, you're not alone. Credit: BlueCat Networks

While the world waited for OSI, the Internet continued to grow and evolve. In 1981, the fourth version of the IP protocol, IPv4, was released. On January 1, 1983, the ARPANET itself fully transitioned to using TCP/IP. This date is sometimes referred to as the "birth of the Internet," although from a user's perspective, the network still functioned the same way it had for years.

A map of the Internet from 1982. Ovals are networks, and rectangles are gateways. Hosts are not shown, but number in the hundreds. Note the appearance of modern-looking IPv4 addresses. Credit: Jon Postel

In 1986, the NSFNET came online, running under TCP/IP and connected to the rest of the Internet. It also used a new standard, the Domain Name System (DNS).
This system, still in use today, used easy-to-remember names to point to a machine's individual IP address. Computer names were assigned "top-level" domains based on their purpose, so you could connect to "frodo.edu" at an educational institution, or "frodo.gov" at a governmental one.

The NSFNET grew rapidly, dwarfing the ARPANET in size. In 1989, the original ARPANET was decommissioned. The IMPs, long since obsolete, were retired. However, all the ARPANET hosts were successfully migrated to other Internet networks. Like a Ship of Theseus, the ARPANET lived on even after every component of it was replaced.

The exponential growth of the ARPANET/Internet during its first two decades. Credit: Jeremy Reimer

Still, the experts and pundits predicted that all of these systems would eventually have to transfer over to the OSI model. The people who had built the Internet were not impressed. In 1987, writing RFC No. 1000, Crocker said, "If we had only consulted the ancient mystics, we would have seen immediately that seven layers were required."

The Internet pioneers felt they had spent many years refining and improving a working system. But now, OSI had arrived with a bunch of complicated standards and expected everyone to adopt their new design. Vint Cerf had a more pragmatic outlook. In 1982, he left ARPA for a new job at MCI, where he helped build the first commercial email system (MCI Mail) that was connected to the Internet. While at MCI, he contacted researchers at IBM, Digital, and Hewlett-Packard and convinced them to experiment with TCP/IP. Leadership at these companies still officially supported OSI, however.

The debate raged on through the latter half of the 1980s and into the early 1990s. Tired of the endless arguments, Cerf contacted the head of the National Institute of Standards and Technology (NIST) and asked him to write a blue-ribbon report comparing OSI and TCP/IP. Meanwhile, while planning a successor to IPv4, the Internet Advisory Board (IAB) was looking at the OSI Connectionless Network Protocol and its 128-bit addressing for inspiration. In an interview with Ars, Vint Cerf explained what happened next.

"It was deliberately misunderstood by firebrands in the IETF [Internet Engineering Task Force] that we are traitors by adopting OSI," he said. "They raised a gigantic hoo-hah. The IAB was deposed, and the authority in the system flipped. IAB used to be the decision makers, but the fight flips it, and IETF becomes the standard maker."

To calm everybody down, Cerf performed a striptease at a meeting of the IETF in 1992. He revealed a T-shirt that said "IP ON EVERYTHING." At the same meeting, David Clark summarized the feelings of the IETF by saying, "We reject kings, presidents, and voting. We believe in rough consensus and running code."

Vint Cerf strips down to the bare essentials. Credit: Boardwatch and Light Reading

The fate of the Internet

The split design of TCP/IP, which was a small technical choice at the time, had long-lasting political implications. In 2001, David Clark and Marjory Blumenthal wrote a paper that looked back on the Protocol War. They noted that the Internet's complex functions were performed at the endpoints, while the network itself ran only the IP part and was concerned simply with moving data from place to place.
These "end-to-end principles" formed the basis of "… the 'Internet Philosophy': freedom of action, user empowerment, end-user responsibility for actions undertaken, and lack of controls 'in' the Net that limit or regulate what users can do," they said.

In other words, the battle between TCP/IP and OSI wasn't just about two competing sets of acronyms. On the one hand, you had a small group of computer scientists who had spent many years building a relatively open network and wanted to see it continue under their own benevolent guidance. On the other hand, you had a huge collective of powerful organizations that believed they should be in charge of the future of the Internet—and maybe the behavior of everyone on it.

But this impossible argument and the ultimate fate of the Internet were about to be decided, and not by governments, committees, or even the IETF. The world was changed forever by the actions of one man. He was a mild-mannered computer scientist, born in England and working for a physics research institute in Switzerland. That's the story covered in the next article in our series.

Jeremy Reimer, Senior Niche Technology Historian. I'm a writer and web developer. I specialize in the obscure and beautiful, like the Amiga and newLISP.
-
WWW.INFORMATIONWEEK.COM
What Top 3 Principles Define Your Role as a CIO and a CTO?
The CIO of IBM and the CTO of NMI discuss some foundational elements that help them navigate the shifting demands of providing leadership on tech.
Joao-Pierre S. Ruth, Senior Editor
April 14, 2025

The duties of C-suite tech leadership at enterprises are changing rapidly of late. AI shook up strategies at many companies and can lead to new demands on CIOs, CTOs, and others responsible for technology plans and use. The core principles that guide CIOs and CTOs can be essential for navigating such times, especially when organizations look to them for direction.

In this episode, Matt Lyteson, CIO of IBM, and Phillip Goericke, CTO of NMI, share some key principles that define their respective roles at their organizations. They also discuss where they picked up some of the lessons that shaped those principles, how their jobs have changed since they got their starts, and whom they look to for inspiration as leaders -- as well as what they wish they knew when they got started. Listen to the full episode here.

About the Author
Joao-Pierre S. Ruth, Senior Editor
Joao-Pierre S. Ruth covers tech policy, including ethics, privacy, legislation, and risk; fintech; code strategy; and cloud & edge computing for InformationWeek. He has been a journalist for more than 25 years, reporting on business and technology first in New Jersey, then covering the New York tech startup community, and later as a freelancer for such outlets as TheStreet, Investopedia, and Street Fight.