• An excerpt from a new book by Sérgio Ferro, published by MACK Books, showcases the architect’s moment of disenchantment

    Last year, MACK Books published Architecture from Below, which anthologized writings by the French Brazilian architect, theorist, and painter Sérgio Ferro. (Douglas Spencer reviewed it for AN.) Now, MACK follows with Design and the Building Site and Complementary Essays, the second in the trilogy of books dedicated to Ferro’s scholarship. The following excerpt of the author’s 2023 preface to the English edition, which preserves its British phrasing, captures Ferro’s realization about the working conditions of construction sites in Brasília. The sentiment is likely relatable even today for young architects as they discover how drawings become buildings. Design and the Building Site and Complementary Essays will be released on May 22.

    If I remember correctly, it was in 1958 or 1959, when Rodrigo and I were second- or third-year architecture students at FAUUSP, that my father, the real estate developer Armando Simone Pereira, commissioned us to design two large office buildings and eleven shops in Brasilia, which was then under construction. Of course, we were not adequately prepared for such an undertaking. Fortunately, Oscar Niemeyer and his team, who were responsible for overseeing the construction of the capital, had drawn up a detailed document determining the essential characteristics of all the private sector buildings. We followed these prescriptions to the letter, which saved us from disaster.
    Nowadays, it is hard to imagine the degree to which the construction of Brasilia inspired enthusiasm and professional pride in the country’s architects. And in the national imagination, the city’s establishment in the supposedly unpopulated hinterland evoked a re-founding of Brazil. Up until that point, the occupation of our immense territory had been reduced to a collection of arborescent communication routes, generally converging upon some river, following it up to the Atlantic Ocean. Through its ports, agricultural or extractive commodities produced by enslaved peoples or their substitutes passed towards the metropolises; goods were exchanged in the metropolises for more elaborate products, which took the opposite route. Our national identity was summed up in a few symbols, such as the anthem or the flag, and this scattering of paths pointing overseas. Brasilia would radically change this situation, or so we believed. It would create a central hub where the internal communication routes could converge, linking together hitherto separate junctions, stimulating trade and economic progress in the country’s interior. It was as if, for the first time, we were taking care of ourselves. At the nucleus of this centripetal movement, architecture would embody the renaissance. And at the navel of the nucleus, the symbolic mandala of this utopia: the cathedral.
    Rodrigo and I got caught up in the euphoria. And perhaps more so than our colleagues, because we were taking part in the adventure with ‘our’ designs. The reality was very different — but we did not know that yet.

    At that time, architects in Brazil were responsible for verifying that the construction was in line with the design. We had already monitored some of our first building sites. But the construction company in charge of them, Osmar Souza e Silva’s CENPLA, specialized in the building sites of modernist architects from the so-called Escola Paulista led by Vilanova Artigas (which we aspired to be a part of, like the pretentious students we were). Osmar was very attentive to his clients and his workers, who formed a supportive and helpful team. He was even more careful with us, because he knew how inexperienced we were. I believe that the CENPLA was particularly important in São Paulo modernism: with its congeniality, it facilitated experimentation, but for the same reason, it deceived novices like us about the reality of other building sites.
    Consequently, Rodrigo and I travelled to Brasilia several times to check that the constructions followed ‘our’ designs and to resolve any issues. From the very first trip, our little bubble burst. Our building sites, like all the others in the future capital, bore no relation to Osmar’s. They were more like a branch of hell. A huge, muddy wasteland, in which a few cranes, pile drivers, tractors, and excavators dotted the mound of scaffolding occupied by thousands of skinny, seemingly exhausted wretches, who were nevertheless driven on by the shouts of master builders and foremen, in turn pressured by the imminence of the fateful inauguration date. Surrounding or huddled underneath the marquees of buildings under construction, entire families, equally skeletal and ragged, were waiting for some accident or death to open up a vacancy. In contact only with the master builders, and under close surveillance so we would not speak to the workers, we were not allowed to see what comrades who had worked on these sites later told us in prison: suicide abounded; escape was known to be futile in the unpopulated surroundings with no viable roads; fatal accidents were often caused by weakness due to chronic diarrhoea, brought on by rotten food that came from far away; outright theft took place in the calculation of wages and expenses in the contractor’s grocery store; camps were surrounded by law enforcement.
    I repeat this anecdote yet again not to invoke the benevolence of potential readers, but rather to point out the conditions that, in my opinion, allowed two students (Flávio Império joined us a little later) still in their professional infancy to quickly adopt positions that were contrary to the usual stance of architects. As the project was more Oscar Niemeyer’s than it was our own, we did not have the same emotional attachment that is understandably engendered between real authors and their designs. We had not yet been imbued with the charm and aura of the métier. And the only building sites we had visited thus far, Osmar’s, were incomparable to those we discovered in Brasilia. In short, our youthfulness and unpreparedness up against an unbearable situation made us react almost immediately to the profession’s satisfied doxa.

    Unprepared and young perhaps, but already with Marx by our side. Rodrigo and I joined the student cell of the Brazilian Communist Party during our first year at university. In itself, this did not help us much: the Party’s Marxism, revised in the interests of the USSR, was pitiful. Even high-level leaders rarely went beyond the first chapter of Capital. But at the end of the 1950s, the effervescence of the years to come was already nascent: […] this extraordinary revival […] the rediscovery of Marxism and the great dialectical texts and traditions in the 1960s: an excitement that identifies a forgotten or repressed moment of the past as the new and subversive, and learns the dialectical grammar of a Hegel or an Adorno, a Marx or a Lukács, like a foreign language that has resources unavailable in our own.
    And what is more: the Chinese and Cuban revolutions, the war in Vietnam, guerrilla warfare of all kinds, national liberation movements, and a rare libertarian disposition in contemporary history, totally averse to fanaticism and respect for ideological apparatuses of (any) state or institution. Going against the grain was almost the norm. We were of course no more than contemporaries of our time. We were soon able to position ourselves from chapters 13, 14, and 15 of Capital, but only because we could constantly cross-reference Marx with our observations from well-contrasted building sites and do our own experimenting. As soon as we identified construction as manufacture, for example, thanks to the willingness and even encouragement of two friends and clients, Boris Fausto and Bernardo Issler, I was able to test both types of manufacture — organic and heterogeneous — on similar-sized projects taking place simultaneously, in order to find out which would be most convenient for the situation in Brazil, particularly in São Paulo. Despite the scientific shortcomings of these tests, they sufficed for us to select organic manufacture. Arquitetura Nova had defined its line of practice, studies, and research.
    There were other sources that were central to our theory and practice. Flávio Império was one of the founders of the Teatro de Arena, undoubtedly the vanguard of popular, militant theatre in Brazil. He won practically every set design award. He brought us his marvelous findings in spatial condensation and malleability, and in the creative diversion of techniques and material—appropriate devices for an underdeveloped country. This is what helped us pave the way to reformulating the reigning design paradigms. 

    We had to do what Flávio had done in the theatre: thoroughly rethink how to be an architect. Upend the perspective. The way we were taught was to start from a desired result; then others would take care of getting there, no matter how. We, on the other hand, set out to go down to the building site and accompany those carrying out the labor itself, those who actually build, the formally subsumed workers in manufacture who are increasingly deprived of the knowledge and know-how presupposed by this kind of subsumption. We should have been fostering the reconstitution of this knowledge and know-how—not so as to fulfil this assumption, but in order to reinvigorate the other side of this assumption according to Marx: the historical rebellion of the manufacture worker, especially the construction worker. We had to rekindle the demand that fueled this rebellion: total self-determination, and not just that of the manual operation as such. Our aim was above all political and ethical. Aesthetics only mattered by way of what it included—ethics. Instead of estética, we wrote est ética [this is ethics]. We wanted to make building sites into nests for the return of revolutionary syndicalism, which we ourselves had yet to discover.
    Sérgio Ferro, born in Brazil in 1938, studied architecture at FAUUSP, São Paulo. In the 1960s, he joined the Brazilian communist party and started, along with Rodrigo Lefevre and Flávio Império, the collective known as Arquitetura Nova. After being arrested by the military dictatorship that took power in Brazil in 1964, he moved to France as an exile. As a painter and a professor at the École Nationale Supérieure d’Architecture de Grenoble, where he founded the Dessin/Chantier laboratory, he engaged in extensive research which resulted in several publications, exhibitions, and awards in Brazil and in France, including the title of Chevalier des Arts et des Lettres in 1992. Following his retirement from teaching, Ferro continues to research, write, and paint.
  • House of the Future by Alison and Peter Smithson: A Visionary Prototype

    House of the Future | 1956 Photograph
    Exhibited at the 1956 Ideal Home Exhibition in London, the House of the Future by Alison and Peter Smithson is a visionary prototype that challenges conventions of domesticity. Set within the context of post-war Britain, a period marked by austerity and emerging optimism, the project explored the intersection of technology, material innovation, and evolving social dynamics. The Smithsons, already recognized for their theoretical rigor and critical stance toward mainstream modernism, sought to push the boundaries of domestic architecture. In the House of the Future, they offered not merely a dwelling but a speculative environment that engaged with the promise and anxieties of the atomic age.

    House of the Future Technical Information

    Architects: Alison and Peter Smithson
    Location: Ideal Home Exhibition, London, United Kingdom
    Client: Daily Mail Ideal Home Exhibition 
    Gross Area: 90 m² | 970 sq. ft.
    Construction Year: 1956
    Photographs: Canadian Centre for Architecture and Unknown Photographer

    The House of the Future should be a serious attempt to visualize the future of our daily living in the light of modern knowledge and available materials.
    – Alison and Peter Smithson 1

    House of the Future Photographs

    1956 photographs of the House of the Future | © Klaas Vermaas and unknown photographers
    Design Intent and Spatial Organization
    At the heart of the House of the Future lies a radical rethinking of spatial organization. Departing from conventional room hierarchies, the design promotes an open, fluid environment. Walls dissolve into curved partitions and adjustable elements, allowing for flexible reinterpretation of domestic spaces. Sleeping, dining, and social areas are loosely demarcated, creating a dynamic continuity that anticipates the contemporary concept of adaptable, multi-functional living.
    Circulation is conceived as an experiential sequence rather than a rigid path. Visitors enter through an air-lock-like vestibule, an explicit nod to the futuristic theme, and are drawn into an environment that eschews right angles and conventional thresholds. The Smithsons’ emphasis on flexibility and continuous movement within the house reflects their belief that domestic architecture must accommodate the evolving rhythms of life.
    Materiality, Technology, and the Future
    Materiality in the House of the Future embodies the optimism of the era. Plastics and synthetic finishes dominate the interior, forming seamless surfaces that evoke a sense of sterility and futurity. Often associated with industrial production, these materials signaled a departure from traditional domestic textures. The smooth, malleable surfaces of the house reinforce the Smithsons’ embrace of prefabrication and modularity.
    Technological integration is a key theme. The design includes built-in appliances and concealed mechanical systems, hinting at an automated lifestyle at once utopian and disquieting. Bathrooms, kitchens, and sleeping pods are incorporated as interchangeable modules, underscoring the house as a system rather than a static structure. In doing so, the Smithsons prefigured later discourses on the “smart home” and the seamless integration of technology into daily life.
    This material and technological strategy reflects a critical understanding of domestic labor and convenience. The house’s self-contained gadgets and synthetic surfaces suggest a future in which maintenance and domestic chores are minimized, freeing inhabitants to engage with broader cultural and social pursuits.
    Legacy and Influence
    The House of the Future’s influence resonates far beyond its exhibition. It prefigured the radical experimentation of groups like Archigram and the metabolist visions of the 1960s. Its modular approach and embrace of technology also foreshadowed the high-tech movement’s fascination with flexibility and systems thinking.
    While the project was ephemeral, a temporary installation at a trade fair, its theoretical provocations endure. It questioned how architecture could not only house but also anticipate and shape new living forms. Moreover, it crystallized the Smithsons’ ongoing interrogation of architecture’s social role, from their later brutalist housing schemes to urban design theories.
    In retrospect, the House of the Future is less a resolved design proposal than an architectural manifesto. It embodies a critical tension between the optimism of technological progress and the need for architecture to respond to human adaptability and social evolution. As we confront contemporary challenges like climate crisis, digital living, and shifting social paradigms, the Smithsons’ speculative experiment remains an evocative reminder that the architecture of tomorrow must be as thoughtful and provocative as the House of the Future.
    House of the Future Plans

    Axonometric view, floor plans, and sections | © Alison and Peter Smithson, via CCA

    About Alison and Peter Smithson
    Alison and Peter Smithson were British architects and influential thinkers who emerged in the mid-20th century, celebrated for their critical reimagining of modern architecture. Their work, including projects like the House of the Future, the Robin Hood Gardens housing complex, and the Upper Lawn Solar Pavilion, consistently challenged conventional notions of domesticity, urbanism, and materiality. Central to their practice was a belief in architecture’s capacity to shape social life, emphasizing adaptability, flexibility, and the dynamic interactions between buildings and their users. They were pivotal in bridging the gap between post-war modernism and the experimental architectural movements of the 1960s and 1970s.
    Credits and Additional Notes

    Banham, Reyner. Theory and Design in the First Machine Age. MIT Press, 1960.
    Forty, Adrian. Words and Buildings: A Vocabulary of Modern Architecture. Thames & Hudson, 2000.
    Smithson, Alison, and Peter Smithson. The Charged Void: Architecture. Monacelli Press, 2001.
    OASE Journal. “Houses of the Future: 1956 and Beyond.” OASE 75, 2007.
    Vidler, Anthony. Histories of the Immediate Present: Inventing Architectural Modernism. MIT Press, 2008.
    Canadian Centre for Architecture. “House of the Future.”
  • HOLLYWOOD VFX TOOLS FOR SPACE EXPLORATION

    By CHRIS McGOWAN

    This image of Jupiter from NASA’s James Webb Space Telescope’s NIRCam shows stunning details of the majestic planet in infrared light.
    Special effects have been used for decades to depict space exploration, from visits to planets and moons to zero gravity and spaceships – one need only think of the landmark 2001: A Space Odyssey. Since that era, visual effects have increasingly grown in realism and importance. VFX have been used for entertainment and for scientific purposes, outreach to the public and astronaut training in virtual reality. Compelling images and videos can bring data to life. NASA’s Scientific Visualization Studio produces visualizations, animations and images to help scientists tell stories of their research and make science more approachable and engaging.
    A.J. Christensen is a senior visualization designer for the NASA Scientific Visualization Studio at the Goddard Space Flight Center in Greenbelt, Maryland. There, he develops data visualization techniques and designs data-driven imagery for scientific analysis and public outreach using Hollywood visual effects tools, according to NASA. SVS visualizations feature datasets from Earth- and space-based instrumentation, scientific supercomputer models and physical statistical distributions that have been analyzed and processed by computational scientists. Christensen’s specialties include working with 3D volumetric data, using the procedural cinematic software Houdini and science topics in Heliophysics, Geophysics and Astrophysics. He previously worked at the National Center for Supercomputing Applications’ Advanced Visualization Lab where he worked on more than a dozen science documentary full-dome films as well as the IMAX films Hubble 3D and A Beautiful Planet – and he worked at DNEG on the movie Interstellar, which won the 2015 Best Visual Effects Academy Award.

    This global map of CO2 was created by NASA’s Scientific Visualization Studio using a model called GEOS, short for the Goddard Earth Observing System. GEOS is a high-resolution weather reanalysis model, powered by supercomputers, that is used to represent what was happening in the atmosphere.
    “The NASA Scientific Visualization Studio operates like a small VFX studio that creates animations of scientific data that has been collected or analyzed at NASA. We are one of several groups at NASA that create imagery for public consumption, but we are also a part of the scientific research process, helping scientists understand and share their data through pictures and video.”
    —A.J. Christensen, Senior Visualization Designer, NASA Scientific Visualization Studio
    About his work at NASA SVS, Christensen comments, “The NASA Scientific Visualization Studio operates like a small VFX studio that creates animations of scientific data that has been collected or analyzed at NASA. We are one of several groups at NASA that create imagery for public consumption, but we are also a part of the scientific research process, helping scientists understand and share their data through pictures and video. This past year we were part of NASA’s total eclipse outreach efforts, we participated in all the major earth science and astronomy conferences, we launched a public exhibition at the Smithsonian Museum of Natural History called the Earth Information Center, and we posted hundreds of new visualizations to our publicly accessible website: svs.gsfc.nasa.gov.”

    This is the ‘beauty shot version’ of Perpetual Ocean 2: Western Boundary Currents. The visualization starts with a rotating globe showing ocean currents. The colors used to color the flow in this version were chosen to provide a pleasing look.
    The Gulf Stream and connected currents.
    Venus, our nearby “sister” planet, beckons today as a compelling target for exploration that may connect the objects in our own solar system to those discovered around nearby stars.
    WORKING WITH DATA
    While Christensen is interpreting the data from active spacecraft and making it usable in different forms, such as for science and outreach, he notes, “It’s not just spacecraft that collect data. NASA maintains or monitors instruments on Earth too – on land, in the oceans and in the air. And to be precise, there are robots wandering around Mars that are collecting data, too.”
    He continues, “Sometimes the data comes to our team as raw telescope imagery, sometimes we get it as a data product that a scientist has already analyzed and extracted meaning from, and sometimes various sensor data is used to drive computational models and we work with the models’ resulting output.”
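    To make those inputs concrete, here is a minimal Python sketch (not SVS code) of loading two of the forms mentioned above – raw telescope imagery stored as FITS and gridded model output stored as NetCDF. The file names and the "co2" variable are hypothetical placeholders.

        # A rough sketch, assuming astropy and netCDF4 are installed and the
        # files exist locally; names are illustrative, not real NASA products.
        import numpy as np
        from astropy.io import fits          # reads FITS imagery
        from netCDF4 import Dataset          # reads NetCDF model output

        # Raw imagery: a FITS file holds pixel arrays plus a metadata header.
        with fits.open("telescope_frame.fits") as hdul:
            image = hdul[0].data             # 2D numpy array of pixel values
            print("image shape:", image.shape,
                  "| instrument:", hdul[0].header.get("INSTRUME"))

        # Model output: a NetCDF file holds named, gridded variables.
        with Dataset("model_run.nc") as ds:
            co2 = np.asarray(ds.variables["co2"][:])   # assumed variable name
            print("model grid shape:", co2.shape)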

    Jupiter’s moon Europa may have life in a vast ocean beneath its icy surface.
    HOUDINI AND OTHER TOOLS
    “Data visualization means a lot of different things to different people, but many people on our team interpret it as a form of filmmaking,” Christensen says. “We are very inspired by the approach to visual storytelling that Hollywood uses, and we use tools that are standard for Hollywood VFX. Many professionals in our area – the visualization of 3D scientific data – were previously using other animation tools but have discovered that Houdini is the most capable of understanding and manipulating unusual data, so there has been major movement toward Houdini over the past decade.”

    Satellite imagery from NASA’s Solar Dynamics Observatory (SDO) shows the Sun in ultraviolet light colorized in light brown. Seen in ultraviolet light, the dark patches on the Sun are known as coronal holes and are regions where fast solar wind gushes out into space.
    Christensen explains, “We have always worked with scientific software as well – sometimes there’s only one software tool in existence to interpret a particular kind of scientific data. More often than not, scientific software does not have a GUI, so we’ve had to become proficient at learning new coding environments very quickly. IDL and Python are the generic data manipulation environments we use when something is too complicated or oversized for Houdini, but there are lots of alternatives out there. Typically, we use these tools to get the data into a format that Houdini can interpret, and then we use Houdini to do our shading, lighting and camera design, and seamlessly blend different datasets together.”
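    As a rough illustration of that hand-off step – a sketch under assumed file, variable and format choices, not the studio’s actual pipeline – a gridded model variable might be flattened in Python into a plain point table that Houdini can then ingest for shading, lighting and camera work:

        # Flatten one timestep of a gridded NetCDF variable into an x,y,z,value
        # table. "model_run.nc", "lat", "lon" and "co2" are assumed names.
        import csv
        import numpy as np
        from netCDF4 import Dataset

        with Dataset("model_run.nc") as ds:
            lat = np.asarray(ds.variables["lat"][:])
            lon = np.asarray(ds.variables["lon"][:])
            co2 = np.asarray(ds.variables["co2"][0])   # first timestep, (lat, lon)

        with open("co2_points.csv", "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow(["x", "y", "z", "co2"])
            for i, la in enumerate(lat):
                for j, lo in enumerate(lon):
                    writer.writerow([float(lo), float(la), 0.0, float(co2[i, j])])

    From there, a table like this can be loaded into Houdini as points, with the value column driving shading and color.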

    While cruising around Saturn in early October 2004, Cassini captured a series of images that have been composed into this large global natural color view of Saturn and its rings. This grand mosaic consists of 126 images acquired in a tile-like fashion, covering one end of Saturn’s rings to the other and the entire planet in between.
    The black hole Gargantua and the surrounding accretion disc from the 2014 movie Interstellar.
    Another visualization of the black hole Gargantua.
    INTERSTELLAR & GARGANTUA
    Christensen recalls working for DNEG on Interstellar. “When I first started at DNEG, they asked me to work on the giant waves on Miller’s ocean planet. About a week in, my manager took me into the hall and said, ‘I was looking at your reel and saw all this astronomy stuff. We’re working on another sequence with an accretion disk around a black hole that I’m wondering if we should put you on.’ And I said, ‘Oh yeah, I’ve done lots of accretion disks.’ So, for the rest of my time on the show, I was working on the black hole team.”
    He adds, “There are a lot of people in my community that would be hesitant to label any big-budget movie sequence as a scientific visualization. The typical assumption is that for a Hollywood movie, no one cares about accuracy as long as it looks good. Guardians of the Galaxy makes it seem like space is positively littered with nebulae, and Star Wars makes it seem like asteroids travel in herds. But the black hole Gargantua in Interstellar is a good case for being called a visualization. The imagery you see in the movie is the direct result of a collaboration with an expert scientist, Dr. Kip Thorne, working with the DNEG research team using the actual Einstein equations that describe the gravity around a black hole.”

    Thorne is a Nobel Prize-winning theoretical physicist who taught at Caltech for many years. He has reached wide audiences with his books and presentations on black holes, time travel and wormholes on PBS and BBC shows. Christensen comments, “You can make the argument that some of the complexity around what a black hole actually looks like was discarded for the film, and they admit as much in the research paper that was published after the movie came out. But our team at NASA does that same thing. There is no such thing as an objectively ‘true’ scientific image – you always have to make aesthetic decisions around whether the image tells the science story, and often it makes more sense to omit information to clarify what’s important. Ultimately, Gargantua taught a whole lot of people something new about science, and that’s what a good scientific visualization aims to do.”

    The SVS produces an annual visualization of the Moon’s phase and libration comprising 8,760 hourly renderings of its precise size, orientation and illumination.
    FURTHER CHALLENGES
    The sheer size of the data often encountered by Christensen and his peers is a challenge. “I’m currently working with a dataset that is 400GB per timestep. It’s so big that I don’t even want to move it from one file server to another. So, then I have to make decisions about which data attributes to keep and which to discard, whether there’s a region of the data that I can cull or downsample, and I have to experiment with data compression schemes that might require me to entirely re-design the pipeline I’m using for Houdini. Of course, if I get rid of too much information, it becomes very resource-intensive to recompute everything, but if I don’t get rid of enough, then my design process becomes agonizingly slow.”
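    The kind of triage Christensen describes – keeping only the attributes that matter, coarsening the grid and compressing what remains – can be sketched in a few lines of Python. This is an illustrative example under assumed dataset and attribute names, not his actual setup.

        # Reduce one large simulation timestep: drop unneeded attributes,
        # downsample each axis, and write the result with compression.
        import h5py

        KEEP = ["density", "temperature"]   # assumed attribute names worth keeping
        STRIDE = 2                          # downsample each axis by 2x

        with h5py.File("timestep_0400.h5", "r") as src, \
             h5py.File("timestep_0400_small.h5", "w") as dst:
            for name in KEEP:
                data = src[name][::STRIDE, ::STRIDE, ::STRIDE]   # coarser grid
                dst.create_dataset(name, data=data,
                                   compression="gzip", compression_opts=4,
                                   chunks=True)

    The trade-off he mentions is visible here: a larger stride or a more aggressive compression level shrinks the working set, but it throws away detail that is expensive to recompute later.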
    SVS also works closely with its NASA partner groups Conceptual Image Lab (CIL) and Goddard Media Studios (GMS) to publish a diverse array of content. Conceptual Image Lab focuses more on the artistic side of things – producing high-fidelity renders using film animation and visual design techniques, according to NASA. Where the SVS primarily focuses on making data-based visualizations, CIL puts more emphasis on conceptual visualizations – producing animations featuring NASA spacecraft, planetary observations and simulations, according to NASA. Goddard Media Studios, on the other hand, is focused more on public outreach – producing interviews, TV programs and documentaries. GMS remains the main producer behind NASA TV, and as such, much of its content is aimed at the general public.

    An impact crater on the moon.
    Image of Mars showing a partly shadowed Olympus Mons toward the upper left of the image.
    Mars. Hellas Basin can be seen in the lower right portion of the image.
    Mars slightly tilted to show the Martian North Pole.
    Christensen notes, “One of the more unique challenges in this field is one of bringing people from very different backgrounds to agree on a common outcome. I work on teams with scientists, communicators and technologists, and we all have different communities we’re trying to satisfy. For instance, communicators are generally trying to simplify animations so their learning goal is clear, but scientists will insist that we add text and annotations on top of the video to eliminate ambiguity and avoid misinterpretations. Often, the technologist will have to say we can’t zoom in or look at the data in a certain way because it will show the data boundaries or data resolution limits. Every shot is a negotiation, but in trying to compromise, we often push the boundaries of what has been done before, which is exciting.”
  • Java turns 30 and shows no signs of slowing down

    The big picture: Java stands as one of the enduring pillars of the software world. The programming language was released by Sun Microsystems on May 23, 1995, and so far has weathered the shifting tides of technology, outlasting many of its rivals and adapting to new eras of computing.
    Java's origins trace back to the early 1990s, when a team at Sun Microsystems led by James Gosling set out to develop a language for interactive television and embedded devices. Initially dubbed "Oak," the project aimed to simplify application development across a range of devices. Gosling famously described Java as "C++ without the guns and knives," a nod to its safer, more streamlined syntax compared to its predecessor.
    As the World Wide Web began to take off, Java's focus shifted from consumer electronics to internet applications. The language's defining feature – platform independence – meant that code could be compiled into bytecode and executed on any device with a Java Virtual Machine.
    This "write once, run anywhere" capability was groundbreaking, allowing software to run across different operating systems with minimal modification.
    Java quickly gained traction with web applets and, soon after, enterprise applications. Its rapid rise prompted competitors to react. Microsoft introduced Visual J++, a Java-compatible language for Windows, but the product was discontinued after a legal dispute with Sun over non-compliance with Java's standards.

    Many universities and colleges offer dedicated Java programming courses and certificates. It is often an introductory language in computer science curricula because of its object-oriented structure.
    The late 1990s and early 2000s saw significant evolution in Java's capabilities. Features like JavaBeans, JDBC (Java Database Connectivity), and the Swing GUI library broadened its use. The language was eventually split into multiple editions – Standard (SE), Enterprise (EE), and Micro (ME) – tailored for desktop, server, and mobile development, respectively.

    In 2006, Sun made a pivotal move by open-sourcing Java, releasing the OpenJDK under the GNU General Public License. This move helped cement Java's role in the open-source community and made it even more accessible to developers worldwide.
    Java's stewardship changed in 2010 when Oracle acquired Sun Microsystems. While the core implementation of Java remained open source, Oracle introduced licensing changes in later years that led some organizations to explore alternatives such as OpenJDK builds from other vendors.
    Java's influence on enterprise software has been profound. Its robust ecosystem, including frameworks like Spring Boot and Jakarta EE, has made it a go-to choice for organizations seeking reliability and scalability. The language's stability and backward compatibility have ensured that even as trends come and go, Java remains a constant in the back offices of countless businesses.

    James Gosling remains closely associated with Java to this day.
    According to industry experts, Java's longevity stems from its adaptability. Brian Fox, CTO of Sonatype, told The Register that Java has endured through changing paradigms, from early web applets to today's cloud-native applications. "Java has outlasted trends, rival languages, and shifting paradigms. It paved the way for open source to enter the enterprise. And, arguably, the enterprise never looked back."
    While it may no longer be the flashiest programming language around, Java remains one of the most important. It powers enterprise systems, big data platforms, and cloud-native architectures alike. Despite the rise of languages like Python and JavaScript, Java consistently ranks among the most-used programming languages in industry surveys.
    As Java enters its fourth decade, it shows no signs of fading away. Instead, it stands as a testament to the enduring value of reliability, adaptability, and a vibrant developer community – a language that, for many, is as essential today as it was in 1995.
  • OpenAI wants ChatGPT to be a ‘super assistant’ for every part of your life

    Thanks to the legal discovery process, Google’s antitrust trial with the Department of Justice has provided a fascinating glimpse into the future of ChatGPT. An internal OpenAI strategy document titled “ChatGPT: H1 2025 Strategy” describes the company’s aspiration to build an “AI super assistant that deeply understands you and is your interface to the internet.” Although the document is heavily redacted in parts, it reveals that OpenAI aims for ChatGPT to soon develop into much more than a chatbot.
    “In the first half of next year, we’ll start evolving ChatGPT into a super-assistant: one that knows you, understands what you care about, and helps with any task that a smart, trustworthy, emotionally intelligent person with a computer could do,” reads the document from late 2024. “The timing is right. Models like 02 and 03 are finally smart enough to reliably perform agentic tasks, tools like computer use can boost ChatGPT’s ability to take action, and interaction paradigms like multimodality and generative UI allow both ChatGPT and users to express themselves in the best way for the task.”
    The document goes on to describe a “super assistant” as “an intelligent entity with T-shaped skills” for both widely applicable and niche tasks. “The broad part is all about making life easier: answering a question, finding a home, contacting a lawyer, joining a gym, planning vacations, buying gifts, managing calendars, keeping track of todos, sending emails.” It mentions coding as an early example of a more niche task.
    Even when reading around the redactions, it’s clear that OpenAI sees hardware as essential to its future, and that it wants people to think of ChatGPT as not just a tool, but a companion. This tracks with Sam Altman recently saying that young people are using ChatGPT like a “life advisor.”
    “Today, ChatGPT is in our lives through existing form factors — our website, phone, and desktop apps,” another part of the strategy document reads. “But our vision for ChatGPT is to help you with all of your life, no matter where you are. At home, it should help answer questions, play music, and suggest recipes. On the go, it should help you get to places, find the best restaurants, or catch up with friends. At work, it should help you take meeting notes, or prepare for the big presentation. And on solo walks, it should help you reflect and wind down.”
    At the same time, OpenAI finds itself in a wobbly position. Its infrastructure isn’t able to handle ChatGPT’s rising usage, which explains Altman’s focus on building data centers. In a section of the document describing AI chatbot competition, the company writes that “we are leading here, but we can’t rest,” and that “growth and revenue won’t line up forever.” It acknowledges that there are “powerful incumbents who will leverage their distribution to advantage their own products,” and states that OpenAI will advocate for regulation that requires other platforms to allow people to set ChatGPT as the default assistant. (Coincidentally, Apple is rumored to soon let iOS users also select Google’s Gemini for Siri queries. Meta AI just hit one billion users as well, thanks mostly to its many hooks in Instagram, WhatsApp, and Facebook.)
    “We have what we need to win: one of the fastest-growing products of all time, a category-defining brand, a research lead (reasoning, multimodal), a compute lead, a world-class research team, and an increasing number of effective people with agency who are motivated to ship,” the OpenAI document states. “We don’t rely on ads, giving us flexibility on what to build. Our culture values speed, bold moves, and self-disruption. Maintaining these advantages is hard work but, if we do, they will last for a while.”
    Elsewhere
    Apple chickens out: For the first time in a decade, Apple won’t have its execs participate in John Gruber’s annual post-WWDC live podcast. Gruber recently wrote the viral “something is rotten in the state of Cupertino” essay, which was widely discussed in Apple circles. Although he hasn’t publicly connected that critical piece to the company backing out of his podcast, it’s easy to see the throughline. It says a lot about the state of Apple when its leaders don’t even want to participate in what has historically been a friendly forum.
    Elon was high: As Elon Musk attempts to reframe the public’s view of him by doing interviews about SpaceX, The New York Times reports that last year, he was taking so much ketamine that it “was affecting his bladder.” He also reportedly “traveled with a daily medication box that held about 20 pills, including ones with the markings of the stimulant Adderall.” Both Musk and the White House have had multiple opportunities to directly refute this report, and they have not. Now, Musk is at least partially stepping away from DOGE along with key lieutenants like Steve Davis. DOGE may be a failure based on Musk’s own stated hopes for spending cuts, but his closeness to Trump has certainly helped rescue X from financial ruin and grown SpaceX’s business. Now, the more difficult work begins: saving Tesla.
    Overheard
    “The way we do ranking is sacrosanct to us.” - Google CEO Sundar Pichai on Decoder, explaining why the company’s search results won’t be changed for President Trump or anyone else.
    “Compared to previous technology changes, I’m a little bit more worried about the labor impact… Yes, people will adapt, but they may not adapt fast enough.” - Anthropic CEO Dario Amodei on CNN raising the alarm about the technology he is developing.
    “Meta is a very different company than it was nine years ago when they fired me.” - Anduril founder Palmer Luckey telling Ashlee Vance why he is linking up with Mark Zuckerberg to make headsets for the military.
    Personnel log
    The flattening of Meta’s AI organization has taken effect, with VP Ahmad Al-Dahle no longer overseeing the entire group. Now, he co-leads “AGI Foundations” with Amir Frenkel, VP of engineering, while Connor Hayes runs all AI products. All three men now report to Meta CPO Chris Cox, who has diplomatically framed the changes as a way to “give each org more ownership.”
    Xbox co-founder J Allard is leading a new ‘breakthrough’ devices group called ZeroOne. One of the devices will be smart home-related, according to job listings.
    C.J. Mahoney, a former Trump administration official, is being promoted to general counsel at Microsoft, which has also hired Lisa Monaco from the last Biden administration to lead global policy.
    Reed Hastings is joining the board of Anthropic “because I believe in their approach to AI development, and to help humanity progress.”
    Sebastian Barrios, previously SVP at Mercado Libre, is joining Roblox as SVP of engineering for several areas, including ads, game discovery, and the company’s virtual currency work.
    Fidji Simo’s replacement at Instacart will be chief business officer Chris Rogers, who will become the company’s next CEO on August 15th after she officially joins OpenAI.
    Link list
    More to click on: If you haven’t already, don’t forget to subscribe to The Verge, which includes unlimited access to Command Line and all of our reporting.
    As always, I welcome your feedback, especially if you have thoughts on this issue or a story idea to share. You can respond here or ping me securely on Signal. Thanks for subscribing.
    #openai #wants #chatgpt #super #assistant
    OpenAI wants ChatGPT to be a ‘super assistant’ for every part of your life
    Thanks to the legal discovery process, Google’s antitrust trial with the Department of Justice has provided a fascinating glimpse into the future of ChatGPT. An internal OpenAI strategy document titled “ChatGPT: H1 2025 Strategy” describes the company’s aspiration to build an “AI super assistant that deeply understands you and is your interface to the internet.” Although the document is heavily redacted in parts, it reveals that OpenAI aims for ChatGPT to soon develop into much more than a chatbot.

    “In the first half of next year, we’ll start evolving ChatGPT into a super-assistant: one that knows you, understands what you care about, and helps with any task that a smart, trustworthy, emotionally intelligent person with a computer could do,” reads the document from late 2024. “The timing is right. Models like o2 and o3 are finally smart enough to reliably perform agentic tasks, tools like computer use can boost ChatGPT’s ability to take action, and interaction paradigms like multimodality and generative UI allow both ChatGPT and users to express themselves in the best way for the task.”

    The document goes on to describe a “super assistant” as “an intelligent entity with T-shaped skills” for both widely applicable and niche tasks. “The broad part is all about making life easier: answering a question, finding a home, contacting a lawyer, joining a gym, planning vacations, buying gifts, managing calendars, keeping track of todos, sending emails.” It mentions coding as an early example of a more niche task.

    Even when reading around the redactions, it’s clear that OpenAI sees hardware as essential to its future, and that it wants people to think of ChatGPT as not just a tool, but a companion. This tracks with Sam Altman recently saying that young people are using ChatGPT like a “life advisor.”

    “Today, ChatGPT is in our lives through existing form factors — our website, phone, and desktop apps,” another part of the strategy document reads. “But our vision for ChatGPT is to help you with all of your life, no matter where you are. At home, it should help answer questions, play music, and suggest recipes. On the go, it should help you get to places, find the best restaurants, or catch up with friends. At work, it should help you take meeting notes, or prepare for the big presentation. And on solo walks, it should help you reflect and wind down.”

    At the same time, OpenAI finds itself in a wobbly position. Its infrastructure isn’t able to handle ChatGPT’s rising usage, which explains Altman’s focus on building data centers. In a section of the document describing AI chatbot competition, the company writes that “we are leading here, but we can’t rest,” and that “growth and revenue won’t line up forever.” It acknowledges that there are “powerful incumbents who will leverage their distribution to advantage their own products,” and states that OpenAI will advocate for regulation that requires other platforms to allow people to set ChatGPT as the default assistant. (Coincidentally, Apple is rumored to soon let iOS users also select Google’s Gemini for Siri queries. Meta AI just hit one billion users as well, thanks mostly to its many hooks in Instagram, WhatsApp, and Facebook.)

    “We have what we need to win: one of the fastest-growing products of all time, a category-defining brand, a research lead (reasoning, multimodal), a compute lead, a world-class research team, and an increasing number of effective people with agency who are motivated to ship,” the OpenAI document states. “We don’t rely on ads, giving us flexibility on what to build. Our culture values speed, bold moves, and self-disruption. Maintaining these advantages is hard work but, if we do, they will last for a while.”

    Elsewhere

    Apple chickens out: For the first time in a decade, Apple won’t have its execs participate in John Gruber’s annual post-WWDC live podcast. Gruber recently wrote the viral “something is rotten in the state of Cupertino” essay, which was widely discussed in Apple circles. Although he hasn’t publicly connected that critical piece to the company backing out of his podcast, it’s easy to see the throughline. It says a lot about the state of Apple when its leaders don’t even want to participate in what has historically been a friendly forum.

    Elon was high: As Elon Musk attempts to reframe the public’s view of him by doing interviews about SpaceX, The New York Times reports that last year, he was taking so much ketamine that it “was affecting his bladder.” He also reportedly “traveled with a daily medication box that held about 20 pills, including ones with the markings of the stimulant Adderall.” Both Musk and the White House have had multiple opportunities to directly refute this report, and they have not. Now, Musk is at least partially stepping away from DOGE along with key lieutenants like Steve Davis. DOGE may be a failure based on Musk’s own stated hopes for spending cuts, but his closeness to Trump has certainly helped rescue X from financial ruin and grown SpaceX’s business. Now, the more difficult work begins: saving Tesla.

    Overheard

    “The way we do ranking is sacrosanct to us.” - Google CEO Sundar Pichai on Decoder, explaining why the company’s search results won’t be changed for President Trump or anyone else.

    “Compared to previous technology changes, I’m a little bit more worried about the labor impact… Yes, people will adapt, but they may not adapt fast enough.” - Anthropic CEO Dario Amodei on CNN, raising the alarm about the technology he is developing.

    “Meta is a very different company than it was nine years ago when they fired me.” - Anduril founder Palmer Luckey, telling Ashlee Vance why he is linking up with Mark Zuckerberg to make headsets for the military.

    Personnel log

    The flattening of Meta’s AI organization has taken effect, with VP Ahmad Al-Dahle no longer overseeing the entire group. Now, he co-leads “AGI Foundations” with Amir Frenkel, VP of engineering, while Connor Hayes runs all AI products. All three men now report to Meta CPO Chris Cox, who has diplomatically framed the changes as a way to “give each org more ownership.”

    Xbox co-founder J Allard is leading a new ‘breakthrough’ devices group at Amazon called ZeroOne. One of the devices will be smart home-related, according to job listings.

    C.J. Mahoney, a former Trump administration official, is being promoted to general counsel at Microsoft, which has also hired Lisa Monaco from the last Biden administration to lead global policy.

    Reed Hastings is joining the board of Anthropic “because I believe in their approach to AI development, and to help humanity progress.” (He’s joining Anthropic’s corporate board, not the supervising board of its public benefit trust that can hire and fire corporate directors.)

    Sebastian Barrios, previously SVP at Mercado Libre, is joining Roblox as SVP of engineering for several areas, including ads, game discovery, and the company’s virtual currency work.

    Fidji Simo’s replacement at Instacart will be chief business officer Chris Rogers, who will become the company’s next CEO on August 15th after she officially joins OpenAI.

    Link list

    More to click on: If you haven’t already, don’t forget to subscribe to The Verge, which includes unlimited access to Command Line and all of our reporting. As always, I welcome your feedback, especially if you have thoughts on this issue or a story idea to share. You can respond here or ping me securely on Signal. Thanks for subscribing.
  • Fueling seamless AI at scale

    From large language models (LLMs) to reasoning agents, today’s AI tools bring unprecedented computational demands. Trillion-parameter models, workloads running on-device, and swarms of agents collaborating to complete tasks all require a new paradigm of computing to become truly seamless and ubiquitous.

    First, technical progress in hardware and silicon design is critical to pushing the boundaries of compute. Second, advances in machine learning (ML) allow AI systems to achieve increased efficiency with smaller computational demands. Finally, the integration, orchestration, and adoption of AI into applications, devices, and systems is crucial to delivering tangible impact and value.

    Silicon’s mid-life crisis

    AI has evolved from classical ML to deep learning to generative AI. The most recent chapter, which took AI mainstream, hinges on two phases—training and inference—that are both data- and energy-intensive, demanding heavy computation, data movement, and cooling. At the same time, Moore’s Law, the observation that the number of transistors on a chip doubles roughly every two years, is reaching a physical and economic plateau.

    For the last 40 years, silicon chips and digital technology have nudged each other forward—every step ahead in processing capability frees the imagination of innovators to envision new products, which require yet more power to run. That is happening at light speed in the AI age.

    As models become more readily available, deployment at scale puts the spotlight on inference and the application of trained models for everyday use cases. This transition requires the appropriate hardware to handle inference tasks efficiently. Central processing units (CPUs) have managed general computing tasks for decades, but the broad adoption of ML introduced computational demands that stretched the capabilities of traditional CPUs. This has led to the adoption of graphics processing units (GPUs) and other accelerator chips for training complex neural networks, due to their parallel execution capabilities and high memory bandwidth, which allow large-scale mathematical operations to be processed efficiently.
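
    To make the parallelism point concrete, here is a minimal, illustrative sketch (not from the article) that times the same large matrix multiplication on a CPU and, when one is available, on a CUDA GPU using PyTorch. Exact numbers depend entirely on the hardware.

```python
# Illustrative only: compare one large matmul on CPU vs. GPU (if present).
import time

import torch

x = torch.randn(4096, 4096)
y = torch.randn(4096, 4096)

def timed_matmul(a: torch.Tensor, b: torch.Tensor, device: torch.device) -> float:
    a, b = a.to(device), b.to(device)
    if device.type == "cuda":
        torch.cuda.synchronize()      # make sure transfers are done before timing
    start = time.perf_counter()
    _ = a @ b
    if device.type == "cuda":
        torch.cuda.synchronize()      # wait for the asynchronous kernel to finish
    return time.perf_counter() - start

print(f"CPU: {timed_matmul(x, y, torch.device('cpu')):.3f} s")
if torch.cuda.is_available():
    print(f"GPU: {timed_matmul(x, y, torch.device('cuda')):.3f} s")
```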

    But CPUs are already the most widely deployed processors and can be companions to accelerators like GPUs and tensor processing units (TPUs). AI developers are also hesitant to adapt software to fit specialized or bespoke hardware, and they favor the consistency and ubiquity of CPUs. Chip designers are unlocking performance gains through optimized software tooling, adding novel processing features and data types specifically to serve ML workloads, integrating specialized units and accelerators, and advancing silicon chip innovations, including custom silicon. AI itself is a helpful aid for chip design, creating a positive feedback loop in which AI helps optimize the chips that it needs to run. These enhancements and strong software support mean modern CPUs are a good choice to handle a range of inference tasks.
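
    As one sketch of what CPU-only inference can look like in practice, the snippet below runs a model with ONNX Runtime pinned to the CPU execution provider. The file name "classifier.onnx" and the image-shaped input are assumptions for illustration, not details from the article.

```python
# Minimal CPU-only inference sketch with ONNX Runtime; model file and input shape are hypothetical.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("classifier.onnx", providers=["CPUExecutionProvider"])
input_name = session.get_inputs()[0].name

batch = np.random.rand(1, 3, 224, 224).astype(np.float32)   # one image-shaped example input
outputs = session.run(None, {input_name: batch})            # compute all model outputs on the CPU
print(outputs[0].shape)
```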

    Beyond silicon-based processors, disruptive technologies are emerging to address growing AI compute and data demands. The unicorn start-up Lightmatter, for instance, introduced photonic computing solutions that use light for data transmission to generate significant improvements in speed and energy efficiency. Quantum computing represents another promising area in AI hardware. While still years or even decades away, the integration of quantum computing with AI could further transform fields like drug discovery and genomics.

    Understanding models and paradigms

    The developments in ML theories and network architectures have significantly enhanced the efficiency and capabilities of AI models. Today, the industry is moving from monolithic models to agent-based systems characterized by smaller, specialized models that work together to complete tasks more efficiently at the edge—on devices like smartphones or modern vehicles. This allows these systems to extract greater performance, like faster model response times, from the same or even less compute.
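
    The pattern is easier to see with a toy sketch. Everything below is hypothetical: a lightweight router dispatches each task to a small, specialised handler standing in for a compact model, rather than sending everything to one monolithic model.

```python
# Toy illustration of routing tasks to small, specialised "models"; all names are hypothetical.
from typing import Callable, Dict

def summarise(text: str) -> str:
    return text[:60] + "..."              # stand-in for a small on-device summarisation model

def translate(text: str) -> str:
    return f"[translated] {text}"         # stand-in for a small translation model

HANDLERS: Dict[str, Callable[[str], str]] = {
    "summarise": summarise,
    "translate": translate,
}

def route(task: str, payload: str) -> str:
    handler = HANDLERS.get(task)
    if handler is None:
        raise ValueError(f"no specialised model registered for task '{task}'")
    return handler(payload)

print(route("summarise", "Smaller specialised models cooperate on tasks at the edge. " * 3))
```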

    Researchers have developed techniques, including few-shot learning, to train AI models using smaller datasets and fewer training iterations. AI systems can learn new tasks from a limited number of examples to reduce dependency on large datasets and lower energy demands. Optimization techniques like quantization, which lower the memory requirements by selectively reducing precision, are helping reduce model sizes without sacrificing performance. 
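
    A back-of-the-envelope sketch of what quantization does: storing a weight tensor as 8-bit integers plus a single floating-point scale cuts its memory footprint roughly fourfold relative to 32-bit floats, at the cost of a small rounding error. This is a simplified, symmetric per-tensor scheme, not any particular library's implementation.

```python
# Simplified symmetric int8 quantisation of a weight tensor (illustrative, not production code).
import numpy as np

weights = np.random.randn(256, 256).astype(np.float32)

scale = np.abs(weights).max() / 127.0                     # one scale shared by the whole tensor
q_weights = np.round(weights / scale).astype(np.int8)     # 8-bit storage
dequantised = q_weights.astype(np.float32) * scale        # what the model actually computes with

print("fp32 bytes:", weights.nbytes, "| int8 bytes:", q_weights.nbytes)
print("max abs rounding error:", float(np.abs(weights - dequantised).max()))
```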

    New system architectures, like retrieval-augmented generation (RAG), have streamlined data access during both training and inference to reduce computational costs and overhead. DeepSeek R1, an open source LLM, is a compelling example of how more output can be extracted using the same hardware. By applying reinforcement learning techniques in novel ways, R1 has achieved advanced reasoning capabilities while using far fewer computational resources in some contexts.
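
    A minimal RAG-style sketch follows. To stay self-contained it retrieves with TF-IDF similarity rather than a learned embedding model, and `call_llm` is a placeholder for whatever model API would actually be used; both are assumptions for illustration.

```python
# Minimal retrieval-augmented generation sketch; retrieval uses TF-IDF and call_llm is a placeholder.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "CPUs remain the most widely deployed processors and handle many inference tasks.",
    "Quantization reduces model memory by selectively lowering numerical precision.",
    "Photonic computing uses light for data transmission to improve speed and efficiency.",
]

vectorizer = TfidfVectorizer().fit(documents)
doc_vectors = vectorizer.transform(documents)

def retrieve(query: str, k: int = 2) -> list:
    scores = cosine_similarity(vectorizer.transform([query]), doc_vectors)[0]
    return [documents[i] for i in scores.argsort()[::-1][:k]]

def call_llm(prompt: str) -> str:
    return f"(model answer grounded in a prompt of {len(prompt)} characters)"   # placeholder

query = "How does quantization help shrink models?"
context = "\n".join(retrieve(query))
print(call_llm(f"Context:\n{context}\n\nQuestion: {query}"))
```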

    The integration of heterogeneous computing architectures, which combine various processing units like CPUs, GPUs, and specialized accelerators, has further optimized AI model performance. This approach allows for the efficient distribution of workloads across different hardware components to optimize computational throughput and energy efficiency based on the use case.
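
    A small sketch of the idea, assuming PyTorch: cheap preprocessing stays on the CPU while the compute-heavy layers run on an accelerator when one is present, which is the simplest form of distributing a workload across heterogeneous hardware.

```python
# Simple heterogeneous placement: CPU preprocessing, accelerator (if available) for the model.
import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = nn.Sequential(nn.Linear(512, 1024), nn.ReLU(), nn.Linear(1024, 10)).to(device)

raw = torch.randn(32, 512)                    # data arrives on the CPU
batch = (raw - raw.mean()) / raw.std()        # cheap normalisation stays on the CPU
with torch.no_grad():
    logits = model(batch.to(device))          # heavy matrix work runs on the accelerator
print(logits.device, logits.shape)
```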

    Orchestrating AI

    As AI becomes an ambient capability humming in the background of many tasks and workflows, agents are taking charge and making decisions in real-world scenarios. These range from customer support to edge use cases, where multiple agents coordinate and handle localized tasks across devices.

    With AI increasingly used in daily life, the role of user experiences becomes critical for mass adoption. Features like predictive text in touch keyboards, and adaptive gearboxes in vehicles, offer glimpses of AI as a vital enabler to improve technology interactions for users.

    Edge processing is also accelerating the diffusion of AI into everyday applications, bringing computational capabilities closer to the source of data generation. Smart cameras, autonomous vehicles, and wearable technology now process information locally to reduce latency and improve efficiency. Advances in CPU design and energy-efficient chips have made it feasible to perform complex AI tasks on devices with limited power resources. This shift toward heterogeneous compute enhances the development of ambient intelligence, where interconnected devices create responsive environments that adapt to user needs.
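
    The latency argument can be illustrated with a toy comparison: the same tiny computation served locally versus after a simulated network round trip. The 80 ms delay is an arbitrary assumption, not a measurement.

```python
# Toy comparison of on-device vs. remote inference latency; the round-trip delay is made up.
import time

NETWORK_ROUND_TRIP_S = 0.08                    # assumed 80 ms round trip, purely illustrative

def local_inference(x: float) -> float:
    return 0.5 * x + 1.0                       # stand-in for a small on-device model

def remote_inference(x: float) -> float:
    time.sleep(NETWORK_ROUND_TRIP_S)           # simulate shipping the request to a server
    return local_inference(x)

for name, fn in [("on-device", local_inference), ("remote", remote_inference)]:
    start = time.perf_counter()
    fn(3.0)
    print(f"{name}: {(time.perf_counter() - start) * 1e3:.1f} ms")
```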

    Seamless AI naturally requires common standards, frameworks, and platforms to bring the industry together. Contemporary AI also brings new risks. For instance, by adding more complex software and personalized experiences to consumer devices, it expands the attack surface for hackers, requiring stronger security at both the software and silicon levels, from cryptographic safeguards to a rethought trust model for compute environments.

    More than 70% of respondents to a 2024 Darktrace survey reported that AI-powered cyber threats significantly impact their organizations, while 60% said their organizations are not adequately prepared to defend against AI-powered attacks.

    Collaboration is essential to forging common frameworks. Universities contribute foundational research, companies apply findings to develop practical solutions, and governments establish policies for ethical and responsible deployment. Organizations like Anthropic are setting industry standards by introducing frameworks, such as the Model Context Protocol, to unify the way developers connect AI systems with data. Arm is another leader in driving standards-based and open source initiatives, including ecosystem development to accelerate and harmonize the chiplet market, where chips are stacked together through common frameworks and standards. Arm also helps optimize open source AI frameworks and models for inference on the Arm compute platform, without needing customized tuning. 

    How far AI goes toward becoming a general-purpose technology, like electricity or semiconductors, is being shaped by technical decisions taken today. Hardware-agnostic platforms, standards-based approaches, and continued incremental improvements to critical workhorses like CPUs all help deliver the promise of AI as a seamless and silent capability for individuals and businesses alike. Open source contributions also allow a broader range of stakeholders to participate in AI advances. By sharing tools and knowledge, the community can cultivate innovation and help ensure that the benefits of AI are accessible to everyone, everywhere.

    Learn more about Arm’s approach to enabling AI everywhere.

    This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

    This content was researched, designed, and written entirely by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.