• A SWOT analysis is a way to take stock of where you stand. You identify your strengths, weaknesses, opportunities, and threats, which can help guide your decisions. Not the most exciting exercise, but a useful one. If you want a free template, there are plenty available online.

    #SWOT #Analyse #Décisions #Entrepreneuriat #ModèleGratuit
    WWW.SEMRUSH.COM
    SWOT Analysis: How to Do It, Examples, & Free Template
    A SWOT analysis identifies your strengths, weaknesses, opportunities, and threats to guide decision-making.
  • The AI execution gap: Why 80% of projects don’t reach production

    Enterprise artificial intelligence investment is unprecedented, with IDC projecting global spending on AI and GenAI to double to $631 billion by 2028. Yet beneath the impressive budget allocations and boardroom enthusiasm lies a troubling reality: most organisations struggle to translate their AI ambitions into operational success.

    The sobering statistics behind AI’s promise

    ModelOp’s 2025 AI Governance Benchmark Report, based on input from 100 senior AI and data leaders at Fortune 500 enterprises, reveals a disconnect between aspiration and execution. While more than 80% of enterprises have 51 or more generative AI projects in proposal phases, only 18% have successfully deployed more than 20 models into production.

    The execution gap represents one of the most significant challenges facing enterprise AI today. Most generative AI projects still require 6 to 18 months to go live – if they reach production at all. The result is delayed returns on investment, frustrated stakeholders, and diminished confidence in AI initiatives across the enterprise.

    The cause: Structural, not technical barriers

    The biggest obstacles preventing AI scalability aren’t technical limitations – they’re structural inefficiencies plaguing enterprise operations. The ModelOp benchmark report identifies several problems that create what experts call a “time-to-market quagmire.”

    Fragmented systems plague implementation. 58% of organisations cite fragmented systems as the top obstacle to adopting governance platforms. Fragmentation creates silos where different departments use incompatible tools and processes, making it nearly impossible to maintain consistent oversight of AI initiatives.

    Manual processes dominate despite digital transformation. 55% of enterprises still rely on manual processes – including spreadsheets and email – to manage AI use case intake. This reliance on antiquated methods creates bottlenecks, increases the likelihood of errors, and makes it difficult to scale AI operations.

    Lack of standardisation hampers progress. Only 23% of organisations implement standardised intake, development, and model management processes. Without these elements, each AI project becomes a unique challenge requiring custom solutions and extensive coordination across multiple teams.

    Enterprise-level oversight remains rare. Just 14% of companies perform AI assurance at the enterprise level, increasing the risk of duplicated efforts and inconsistent oversight. The lack of centralised governance means organisations often discover they’re solving the same problems multiple times in different departments.

    The governance revolution: From obstacle to accelerator

    A change is taking place in how enterprises view AI governance. Rather than seeing it as a compliance burden that slows innovation, forward-thinking organisations recognise governance as an important enabler of scale and speed.

    Leadership alignment signals a strategic shift. The ModelOp benchmark data reveals a change in organisational structure: 46% of companies now assign accountability for AI governance to a Chief Innovation Officer – more than four times the number who place accountability under Legal or Compliance. This repositioning reflects a new understanding that governance isn’t solely about risk management, but can enable innovation.

    Investment follows strategic priority. The financial commitment to AI governance underscores its importance. According to the report, 36% of enterprises have budgeted at least $1 million annually for AI governance software, while 54% have allocated resources specifically for AI Portfolio Intelligence to track value and ROI.

    What high-performing organisations do differently

    The enterprises that successfully bridge the execution gap share several characteristics in their approach to AI implementation.

    Standardised processes from day one. Leading organisations implement standardised intake, development, and model review processes across AI initiatives. Consistency eliminates the need to reinvent workflows for each project and ensures that all stakeholders understand their responsibilities.

    Centralised documentation and inventory. Rather than allowing AI assets to proliferate in disconnected systems, successful enterprises maintain centralised inventories that provide visibility into every model’s status, performance, and compliance posture.

    Automated governance checkpoints. High-performing organisations embed automated governance checkpoints throughout the AI lifecycle, helping ensure compliance requirements and risk assessments are addressed systematically rather than as afterthoughts.

    End-to-end traceability. Leading enterprises maintain complete traceability of their AI models, including data sources, training methods, validation results, and performance metrics.

    Measurable impact of structured governance

    The benefits of implementing comprehensive AI governance extend beyond compliance. Organisations that adopt lifecycle automation platforms reportedly see dramatic improvements in operational efficiency and business outcomes.

    A financial services firm profiled in the ModelOp report halved its time to production and cut issue resolution time by 80% after implementing automated governance processes. Such improvements translate directly into faster time-to-value and increased confidence among business stakeholders.

    Enterprises with robust governance frameworks report the ability to run many times more models simultaneously while maintaining oversight and control. This scalability lets organisations pursue AI initiatives across multiple business units without overwhelming their operational capabilities.

    The path forward: From stuck to scaled

    The message from industry leaders is that the gap between AI ambition and execution is solvable, but closing it requires a shift in approach. Rather than treating governance as a necessary evil, enterprises should recognise that it enables AI innovation at scale.

    Immediate action items for AI leaders

    Organisations looking to escape the “time-to-market quagmire” should prioritise the following:

    - Audit the current state: conduct an assessment of existing AI initiatives, identifying fragmented processes and manual bottlenecks.
    - Standardise workflows: implement consistent processes for AI use case intake, development, and deployment across all business units.
    - Invest in integration: deploy platforms that unify disparate tools and systems under a single governance framework.
    - Establish enterprise oversight: create centralised visibility into all AI initiatives, with real-time monitoring and reporting.

    The competitive advantage of getting it right

    Organisations that solve the execution challenge will be able to bring AI solutions to market faster, scale more efficiently, and maintain the trust of stakeholders and regulators. Enterprises that continue with fragmented processes and manual workflows will find themselves at a disadvantage compared to their more organised competitors. Operational excellence isn’t merely about efficiency; it’s about survival.

    The data shows enterprise AI investment will continue to grow. The question isn’t whether organisations will invest in AI, but whether they’ll develop the operational capabilities necessary to realise a return on that investment. The opportunity to lead in the AI-driven economy has never been greater for those willing to embrace governance as an enabler, not an obstacle.

    (Image source: Unsplash)
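    The practices the article describes – standardised intake, a centralised model inventory, and automated governance checkpoints before production – can be sketched as a minimal data structure. The class and field names below are illustrative assumptions for the sketch, not ModelOp's actual schema or any vendor's API:

    ```python
    from dataclasses import dataclass, field

    @dataclass
    class ModelRecord:
        # Hypothetical record for one AI model; fields mirror the
        # traceability items the article lists (data sources,
        # validation, risk review, lifecycle status).
        name: str
        owner: str
        data_sources: list = field(default_factory=list)
        validation_passed: bool = False
        risk_reviewed: bool = False
        status: str = "proposed"  # proposed -> production

    class ModelInventory:
        """Centralised inventory: one registry for every model."""

        def __init__(self):
            self._records = {}

        def intake(self, record: ModelRecord):
            # Standardised intake: every model enters through one gate.
            self._records[record.name] = record

        def promote(self, name: str) -> bool:
            # Automated checkpoint: promotion to production requires
            # validation results and a completed risk review, so
            # governance is enforced systematically, not as an afterthought.
            rec = self._records[name]
            if rec.validation_passed and rec.risk_reviewed:
                rec.status = "production"
                return True
            return False

        def audit(self):
            # Enterprise-level oversight: one view of every model's status.
            return {n: r.status for n, r in self._records.items()}
    ```

    In this sketch, a model that skips validation or risk review simply cannot be promoted, which is the sense in which governance acts as an enabler of scale rather than a brake: the checkpoint replaces ad hoc coordination between teams.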
    WWW.ARTIFICIALINTELLIGENCE-NEWS.COM
    The AI execution gap: Why 80% of projects don’t reach production
  • Archaeologists Stumble Onto Sprawling Ancient Roman Villa During Construction of a Road in France

    Cool Finds

    Located near Auxerre, the grand estate once possessed an exorbitant level of wealth, with thermal baths and heated floors

    Aerial view of the villa, with thermal baths at the bottom right, the garden and fountain in the center, and the agricultural fields expanding to the left
    Ch. Fouquin / INRAP

    In ancient times, all roads led to Rome—or so the saying goes. Nowadays, new roads can lead to Roman ruins.
    During construction on an alternative route to D606, a regional road just under two miles outside of Auxerre, in central France, salvage archaeologists unearthed a sprawling Roman villa complete with a stately garden, a fountain and an elaborate system of underfloor heating known as a hypocaust, according to a statement from the French National Institute for Preventive Archaeological Research (INRAP).
    While researchers have been aware of the ruins on the outskirts of the Gallo-Roman settlement of Autissiodorum (as Auxerre was once known) since the 19th century, previous excavations have been limited. The most recent dig, in 1966, found a 7,500-square-foot building with ten rooms and amenities that suggested its residents enjoyed great wealth and regional power.

    The site of Sainte-Nitasse, adjacent to a regional highway

    Ch. Fouquin / INRAP

    But until now, the true scale of the villa known as Sainte-Nitasse and its surrounding agricultural estates along the River Yonne was unclear. Archaeologists at INRAP have since discovered a 43,000-square-foot building thought to date to between the first and third centuries C.E. It suggests a previously unimagined level of grandeur.
    INRAP identifies the site as one of the “grand villas of Roman Gaul,” according to the statement. Grand villas are typified by their vast dimensions and sophisticated architectural style. They typically encompass both agricultural and residential portions, known in Latin as pars rustica and pars urbana, respectively. In the pars urbana, grand villas tend to feature stately construction materials like marble; extensive mosaics and frescoes; and amenities like private baths, fountains and gardens.
    So far, the excavations at Sainte-Nitasse have revealed all these features and more.
    The villa’s development is extensive. A 4,800-square-foot garden is enclosed by a fountain to the south and a water basin, or an ornamental pond, to the north. The hypocaust, an ancient system of central heating that circulated hot air beneath the floors of the house, signals a level of luxury atypical for rural estates in Roman Gaul.

    A section of the villa's hypocaust heating system, which circulated hot air beneath the floor

    Ch. Fouquin / INRAP

    “We can imagine it as an ‘aristocratic’ villa, belonging to someone with riches, responsibilities—perhaps municipal, given the proximity to Auxerre—a landowner who had staff on site,” Alexandre Burgevin, the archaeologist in charge of the excavations with INRAP, tells France Info’s Lisa Guyenne.
    Near the banks of the Yonne, a thermal bath site contains several pools where the landowner and his family bathed. On the other side of the garden, workers toiled in the fields of a massive agricultural estate.
    Aside from its size and amenities, the villa’s level of preservation also astounded archaeologists. “For a rural site, it’s quite exceptional,” Burgevin tells L’Yonne Républicaine’s Titouan Stücker. “You can walk on floors from the time period, circulate between rooms like the Gallo-Romans did.”

    Over time, Autissiodorum grew to become a major city along the Via Agrippa, eventually earning the honor of serving as a provincial Roman capital by the fourth century C.E. As Gaul began slipping away from the Roman Empire around the same time, the prominence of the city fluctuated. INRAP archaeologists speculate that the site was repurposed during medieval times, around the 13th century.
    Burgevin offers several explanations for why the site remained so well preserved in subsequent centuries. The humid conditions along the banks of the river might have prevented excess decay. Since this portion of the River Yonne wasn’t canalized until the 19th century, engineers may have already been aware of the presence of ruins. Or, perhaps the rubble of the villa created “bumpy,” intractable soil that was “not easy to pass over with a tractor,” he tells France Info.
    While the site will briefly open to the public on June 15 for European Archaeology Days, an annual event held at sites across the continent, excavations will continue until September, at which time construction on the road will resume. Much work is to be done, including filling in large gaps of the site’s chronology between the Roman and medieval eras.
    “We have well-built walls but few objects,” says Burgevin, per L’Yonne Républicaine. “It will be necessary to continue digging to understand better.”

  • An excerpt from a new book by Sérgio Ferro, published by MACK Books, showcases the architect’s moment of disenchantment

    Last year, MACK Books published Architecture from Below, which anthologized writings by the French Brazilian architect, theorist, and painter Sérgio Ferro. (Douglas Spencer reviewed it for AN.) Now, MACK follows with Design and the Building Site and Complementary Essays, the second in the trilogy of books dedicated to Ferro’s scholarship. The following excerpt of the author’s 2023 preface to the English edition, which preserves its British phrasing, captures Ferro’s realization about the working conditions of construction sites in Brasília. The sentiment is likely relatable even today for young architects as they discover how drawings become buildings. Design and the Building Site and Complementary Essays will be released on May 22.

    If I remember correctly, it was in 1958 or 1959, when Rodrigo and I were second- or third-year architecture students at FAUUSP, that my father, the real estate developer Armando Simone Pereira, commissioned us to design two large office buildings and eleven shops in Brasilia, which was then under construction. Of course, we were not adequately prepared for such an undertaking. Fortunately, Oscar Niemeyer and his team, who were responsible for overseeing the construction of the capital, had drawn up a detailed document determining the essential characteristics of all the private sector buildings. We followed these prescriptions to the letter, which saved us from disaster.
    Nowadays, it is hard to imagine the degree to which the construction of Brasilia inspired enthusiasm and professional pride in the country’s architects. And in the national imagination, the city’s establishment in the supposedly unpopulated hinterland evoked a re-founding of Brazil. Up until that point, the occupation of our immense territory had been reduced to a collection of arborescent communication routes, generally converging upon some river, following it up to the Atlantic Ocean. Through its ports, agricultural or extractive commodities produced by enslaved peoples or their substitutes passed towards the metropolises; goods were exchanged in the metropolises for more elaborate products, which took the opposite route. Our national identity was summed up in a few symbols, such as the anthem or the flag, and this scattering of paths pointing overseas. Brasilia would radically change this situation, or so we believed. It would create a central hub where the internal communication routes could converge, linking together hitherto separate junctions, stimulating trade and economic progress in the country’s interior. It was as if, for the first time, we were taking care of ourselves. At the nucleus of this centripetal movement, architecture would embody the renaissance. And at the navel of the nucleus, the symbolic mandala of this utopia: the cathedral.
    Rodrigo and I got caught up in the euphoria. And perhaps more so than our colleagues, because we were taking part in the adventure with ‘our’ designs. The reality was very different — but we did not know that yet.

    At that time, architects in Brazil were responsible for verifying that the construction was in line with the design. We had already monitored some of our first building sites. But the construction company in charge of them, Osmar Souza e Silva’s CENPLA, specialized in the building sites of modernist architects from the so-called Escola Paulista led by Vilanova Artigas (which we aspired to be a part of, like the pretentious students we were). Osmar was very attentive to his clients and his workers, who formed a supportive and helpful team. He was even more careful with us, because he knew how inexperienced we were. I believe that the CENPLA was particularly important in São Paulo modernism: with its congeniality, it facilitated experimentation, but for the same reason, it deceived novices like us about the reality of other building sites.
    Consequently, Rodrigo and I travelled to Brasilia several times to check that the constructions followed ‘our’ designs and to resolve any issues. From the very first trip, our little bubble burst. Our building sites, like all the others in the future capital, bore no relation to Osmar’s. They were more like a branch of hell. A huge, muddy wasteland, in which a few cranes, pile drivers, tractors, and excavators dotted the mound of scaffolding occupied by thousands of skinny, seemingly exhausted wretches, who were nevertheless driven on by the shouts of master builders and foremen, in turn pressured by the imminence of the fateful inauguration date. Surrounding or huddled underneath the marquees of buildings under construction, entire families, equally skeletal and ragged, were waiting for some accident or death to open up a vacancy. In contact only with the master builders, and under close surveillance so we would not speak to the workers, we were not allowed to see what comrades who had worked on these sites later told us in prison: suicide abounded; escape was known to be futile in the unpopulated surroundings with no viable roads; fatal accidents were often caused by weakness due to chronic diarrhoea, brought on by rotten food that came from far away; outright theft took place in the calculation of wages and expenses in the contractor’s grocery store; camps were surrounded by law enforcement.
    I repeat this anecdote yet again not to invoke the benevolence of potential readers, but rather to point out the conditions that, in my opinion, allowed two students (Flávio Império joined us a little later) still in their professional infancy to quickly adopt positions that were contrary to the usual stance of architects. As the project was more Oscar Niemeyer’s than it was our own, we did not have the same emotional attachment that is understandably engendered between real authors and their designs. We had not yet been imbued with the charm and aura of the métier. And the only building sites we had visited thus far, Osmar’s, were incomparable to those we discovered in Brasilia. In short, our youthfulness and unpreparedness up against an unbearable situation made us react almost immediately to the profession’s satisfied doxa.

    Unprepared and young perhaps, but already with Marx by our side. Rodrigo and I joined the student cell of the Brazilian Communist Party during our first year at university. In itself, this did not help us much: the Party’s Marxism, revised in the interests of the USSR, was pitiful. Even high-level leaders rarely went beyond the first chapter of Capital. But at the end of the 1950s, the effervescence of the years to come was already nascent: […] this extraordinary revival […] the rediscovery of Marxism and the great dialectical texts and traditions in the 1960s: an excitement that identifies a forgotten or repressed moment of the past as the new and subversive, and learns the dialectical grammar of a Hegel or an Adorno, a Marx or a Lukács, like a foreign language that has resources unavailable in our own.
    And what is more: the Chinese and Cuban revolutions, the war in Vietnam, guerrilla warfare of all kinds, national liberation movements, and a rare libertarian disposition in contemporary history, totally averse to fanaticism and respect for ideological apparatuses of (any) state or institution. Going against the grain was almost the norm. We were of course no more than contemporaries of our time. We were soon able to position ourselves from chapters 13, 14, and 15 of Capital, but only because we could constantly cross-reference Marx with our observations from well-contrasted building sites and do our own experimenting. As soon as we identified construction as manufacture, for example, thanks to the willingness and even encouragement of two friends and clients, Boris Fausto and Bernardo Issler, I was able to test both types of manufacture — organic and heterogeneous — on similar-sized projects taking place simultaneously, in order to find out which would be most convenient for the situation in Brazil, particularly in São Paulo. Despite the scientific shortcomings of these tests, they sufficed for us to select organic manufacture. Arquitetura Nova had defined its line of practice, studies, and research.
    There were other sources that were central to our theory and practice. Flávio Império was one of the founders of the Teatro de Arena, undoubtedly the vanguard of popular, militant theatre in Brazil. He won practically every set design award. He brought us his marvelous findings in spatial condensation and malleability, and in the creative diversion of techniques and material—appropriate devices for an underdeveloped country. This is what helped us pave the way to reformulating the reigning design paradigms. 

    We had to do what Flávio had done in the theatre: thoroughly rethink how to be an architect. Upend the perspective. The way we were taught was to start from a desired result; then others would take care of getting there, no matter how. We, on the other hand, set out to go down to the building site and accompany those carrying out the labor itself, those who actually build, the formally subsumed workers in manufacture who are increasingly deprived of the knowledge and know-how presupposed by this kind of subsumption. We should have been fostering the reconstitution of this knowledge and know-how—not so as to fulfil this assumption, but in order to reinvigorate the other side of this assumption according to Marx: the historical rebellion of the manufacture worker, especially the construction worker. We had to rekindle the demand that fueled this rebellion: total self-determination, and not just that of the manual operation as such. Our aim was above all political and ethical. Aesthetics only mattered by way of what it included—ethics. Instead of estética, we wrote est ética. We wanted to make building sites into nests for the return of revolutionary syndicalism, which we ourselves had yet to discover.
    Sérgio Ferro, born in Brazil in 1938, studied architecture at FAUUSP, São Paulo. In the 1960s, he joined the Brazilian communist party and started, along with Rodrigo Lefevre and Flávio Império, the collective known as Arquitetura Nova. After being arrested by the military dictatorship that took power in Brazil in 1964, he moved to France as an exile. As a painter and a professor at the École Nationale Supérieure d’Architecture de Grenoble, where he founded the Dessin/Chantier laboratory, he engaged in extensive research which resulted in several publications, exhibitions, and awards in Brazil and in France, including the title of Chevalier des Arts et des Lettres in 1992. Following his retirement from teaching, Ferro continues to research, write, and paint.
We had to rekindle the demand that fueled this rebellion: total self-determination, and not just that of the manual operation as such. Our aim was above all political and ethical. Aesthetics only mattered by way of what it included—ethics. Instead of estética, we wrote est ética [this is ethics]. We wanted to make building sites into nests for the return of revolutionary syndicalism, which we ourselves had yet to discover.

Sérgio Ferro, born in Brazil in 1938, studied architecture at FAUUSP, São Paulo. In the 1960s, he joined the Brazilian communist party and started, along with Rodrigo Lefevre and Flávio Império, the collective known as Arquitetura Nova. After being arrested by the military dictatorship that took power in Brazil in 1964, he moved to France as an exile. As a painter and a professor at the École Nationale Supérieure d’Architecture de Grenoble, where he founded the Dessin/Chantier laboratory, he engaged in extensive research which resulted in several publications, exhibitions, and awards in Brazil and in France, including the title of Chevalier des Arts et des Lettres in 1992. Following his retirement from teaching, Ferro continues to research, write, and paint.
  • Why Companies Need to Reimagine Their AI Approach

    Ivy Grant, SVP of Strategy & Operations, Twilio · June 13, 2025 · 5 Min Read · peshkova via Alamy Stock

    Ask technologists and enterprise leaders what they hope AI will deliver, and most will land on some iteration of the "T" word: transformation. No surprise, AI and its “cooler than you” cousin, generative AI (GenAI), have been hyped nonstop for the past 24 months. But therein lies the problem. Many organizations are rushing to implement AI without a grasp on the return on investment (ROI), leading to high spend and low impact. Without anchoring AI to clear friction points and acceleration opportunities, companies invite fatigue, anxiety, and competitive risk. Two-thirds of C-suite execs say GenAI has created tension and division within their organizations; nearly half say it’s “tearing their company apart.” Most (71%) report adoption challenges; more than a third call it a massive disappointment.

    While AI's potential is irrefutable, companies need to reject the narrative of AI as a standalone strategy or transformational savior. Its true power is as a catalyst to amplify what already works and surface what could. Here are three principles to make that happen.

    1. Start with friction, not function

    Many enterprises struggle with where to start when integrating AI. My advice: Start where the pain is greatest. Identify the processes that create the most friction and work backward from there. AI is a tool, not a solution. By mapping real pain points to AI use cases, you can hone investments to the ripest fruit rather than simply where it hangs at the lowest.

    For example, one of our top sources of customer pain was troubleshooting undeliverable messages, which forced users to sift through error code documentation. To solve this, we introduced an AI assistant to detect anomalies, explain causes in natural language, and guide customers toward resolution. We achieved a 97% real-time resolution rate through a blend of conversational AI and live support. 
Most companies have long-standing friction points that support teams routinely explain away, or that you’ve developed organizational calluses over: problems considered “just the cost of doing business.” GenAI allows leaders to revisit these areas and reimagine what’s possible.

2. The need for (dual) speed

We hear stories of leaders pushing an “all or nothing” version of AI transformation: Use AI to cut functional headcount or die. Rather than leading with a “stick” through wholesale transformation mandates or threats to budgets, we must recognize AI implementation as a fundamental culture change. Just as you wouldn't expect to transform your company culture overnight by edict, it's unreasonable to expect something different from your AI transformation.

Some leaders have a tendency to move faster than the innovation ability or comfort level of their people. Most functional leads aren’t obstinate in their slow adoption of AI tools; they hold long-held beliefs about how to run a process or assess risks. We hired these leaders for their decades of experience in “what good looks like” and deep expertise in incremental improvements; then we expect them to suddenly define a futuristic vision that challenges their own beliefs. As executive leaders, we must give grace, space, and plenty of “carrots” -- incentives, training, and support resources -- to help them reimagine complex workflows with AI.

And we must recognize that AI can make progress in ways that may not immediately create cost efficiencies, such as operational improvements that require data cleansing, deep analytics, forecasting, dynamic pricing, and signal sensing. These aren’t the sexy parts of AI, but they’re the types of issues that require the superhuman intelligence and complex problem-solving that AI was made for.

3. A flywheel of acceleration

The other transformation that AI should support is creating faster and broader “test and learn” cycles. 
AI implementation is not a linear process with a start here and an end there. Organizations that want to leverage AI as a competitive advantage should establish use cases where AI can break down company silos and act as a catalyst: each opportunity identifies the next, a flywheel of acceleration. This flywheel builds on accumulated learnings, turning small successes into larger wins while avoiding the costly AI disasters that come from rushed implementation.

For example, at Twilio we are building a customer intelligence platform that analyzes thousands of conversations to identify patterns and drive insights. If we see multiple customers mention a competitor's pricing, it could signal a take-out campaign. What once took weeks to recognize and escalate can now be done in near real-time and used for highly coordinated activations across marketing, product, sales, and other teams. With every AI acceleration win, we uncover more places to improve hand-offs, activation speed, and business decision-making. That flywheel of innovation is how true AI transformation begins to drive impactful business outcomes. 
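The conversation-pattern signal described above can be sketched in a few lines. The function below is a hypothetical illustration of the idea (the function name, data shape, and threshold are all invented for this sketch), not Twilio's implementation:

```javascript
// Hypothetical sketch: flag a possible competitor "take-out campaign" when
// several *distinct* customers mention a competitor's pricing.
function flagTakeOutCampaign(conversations, competitor, threshold = 3) {
  const mentions = new Set();
  for (const c of conversations) {
    const text = c.text.toLowerCase();
    if (text.includes(competitor) && text.includes("pricing")) {
      mentions.add(c.customerId); // count distinct customers, not messages
    }
  }
  return mentions.size >= threshold;
}
```

The point of the sketch is the escalation logic: aggregating weak per-conversation signals into one organization-level alert that marketing, product, and sales can act on together.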
Ideas to Fuel Your AI Strategy

Organizations can accelerate their AI implementations through these simple shifts in approach:

- Revisit your long-standing friction points, both customer-facing and internal, across your organization -- particularly the ones you thought were “the cost of doing business”
- Don’t just look for where AI can reduce manual processes; find the highly complex problems and start experimenting
- Support your functional experts with AI-driven training, resources, tools, and incentives to help them challenge their long-held beliefs about what works for the future
- Treat AI implementation as a cultural change that requires time, experimentation, learning, and carrots (not just sticks)
- Recognize that transformation starts with a flywheel of acceleration, where each new experiment can lead to the next big discovery

The most impactful AI implementations don’t rush transformation; they strategically accelerate core capabilities and unlock new ones to drive measurable change.

About the Author

Ivy Grant, SVP of Strategy & Operations, Twilio. Ivy Grant is Senior Vice President of Strategy & Operations at Twilio, where she leads strategic planning, enterprise analytics, and M&A integration, and is responsible for driving transformational initiatives that enable Twilio to continuously improve its operations. Prior to Twilio, Ivy’s career balanced senior roles in strategy consulting at McKinsey & Company, Edelman, and PwC with customer-centric operational roles at Walmart, Polo Ralph Lauren, and tech startup Eversight Labs. She loves solo international travel, hugging exotic animals, and boxing. Ivy has an MBA from NYU’s Stern School of Business and a BS in Applied Economics from Cornell University.
    WWW.INFORMATIONWEEK.COM
    Why Companies Need to Reimagine Their AI Approach
  • Over 269,000 Websites Infected with JSFireTruck JavaScript Malware in One Month

    Jun 13, 2025Ravie LakshmananWeb Security / Network Security

    Cybersecurity researchers are calling attention to a "large-scale campaign" that has been observed compromising legitimate websites with malicious JavaScript injections.
    According to Palo Alto Networks Unit 42, these malicious injects are obfuscated using JSFuck, which refers to an "esoteric and educational programming style" that uses only a limited set of characters to write and execute code.
    The cybersecurity company has given the technique the alternate name JSFireTruck owing to the profanity involved.
    "Multiple websites have been identified with injected malicious JavaScript that uses JSFireTruck obfuscation, which is composed primarily of the symbols [, ], +, $, {, and }," security researchers Hardik Shah, Brad Duncan, and Pranay Kumar Chhaparwal said. "The code's obfuscation hides its true purpose, hindering analysis."
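Obfuscation in this style works because JavaScript's type coercion lets a handful of symbols reconstruct any number or string, so the injected code needs no letters at all. A minimal demonstration of the underlying trick (this is the general JSFuck idea, not the campaign's actual payload):

```javascript
// JSFuck-style primitives: coercion turns a few symbols into arbitrary values.
const zero = +[];        // [] coerces to "" and then to the number 0
const one = +!![];       // ![] is false, !![] is true, +true is 1
const two = !![] + !![]; // true + true
const f = (![] + [])[0]; // ![] + [] is the string "false"; index 0 is "f"
console.log(zero, one, two, f); // 0 1 2 f
```

Chaining such fragments lets an attacker spell out property names and call functions, which is why the obfuscated code resists casual reading and signature matching.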

    Further analysis has determined that the injected code is designed to check the website referrer ("document.referrer"), which identifies the address of the web page from which a request originated.
    Should the referrer be a search engine such as Google, Bing, DuckDuckGo, Yahoo!, or AOL, the JavaScript code redirects victims to malicious URLs that can deliver malware, exploits, traffic monetization, and malvertising.
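The referrer gate described here amounts to a hostname check before redirecting. A simplified, hypothetical reconstruction of that logic follows (the helper name and host list are illustrative; this is not the actual injected code):

```javascript
// Sketch of the described gating logic: the injected script fires only for
// visitors arriving from a search engine, so direct visits (and many
// analysts) see nothing unusual.
const SEARCH_HOSTS = ["google.", "bing.", "duckduckgo.", "yahoo.", "aol."];

function cameFromSearchEngine(referrer) {
  try {
    const host = new URL(referrer).hostname.toLowerCase();
    return SEARCH_HOSTS.some((h) => host.includes(h));
  } catch {
    return false; // empty or malformed referrer: treat as a direct visit
  }
}
// In an attack this would gate the redirect:
// if (cameFromSearchEngine(document.referrer)) { /* redirect to payload */ }
```

Gating on the referrer is a common cloaking tactic: it targets organic search traffic while reducing the chance that site owners, who usually visit their own pages directly, ever trigger the malicious behavior.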

    Unit 42 said its telemetry uncovered 269,552 web pages that have been infected with JavaScript code using the JSFireTruck technique between March 26 and April 25, 2025. A spike in the campaign was first recorded on April 12, when over 50,000 infected web pages were observed in a single day.
    "The campaign's scale and stealth pose a significant threat," the researchers said. "The widespread nature of these infections suggests a coordinated effort to compromise legitimate websites as attack vectors for further malicious activities."
    Say Hello to HelloTDS
    The development comes as Gen Digital took the wraps off a sophisticated Traffic Distribution Service (TDS) called HelloTDS that's designed to conditionally redirect site visitors to fake CAPTCHA pages, tech support scams, fake browser updates, unwanted browser extensions, and cryptocurrency scams through remotely-hosted JavaScript code injected into the sites.
    The primary objective of the TDS is to act as a gateway, determining the exact nature of content to be delivered to the victims after fingerprinting their devices. If the user is not deemed a suitable target, the victim is redirected to a benign web page.

    "The campaign entry points are infected or otherwise attacker-controlled streaming websites, file sharing services, as well as malvertising campaigns," researchers Vojtěch Krejsa and Milan Špinka said in a report published this month.
    "Victims are evaluated based on geolocation, IP address, and browser fingerprinting; for example, connections through VPNs or headless browsers are detected and rejected."
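Headless-browser detection of the kind described often keys off a few well-known client signals, such as the `navigator.webdriver` automation flag. A hypothetical sketch of such a filter (the field checks are illustrative of the technique, not HelloTDS's actual code):

```javascript
// Sketch of a TDS-style client filter: visitors whose browser fingerprint
// looks like an analysis environment are rejected and shown a benign page
// instead of the payload.
function looksLikeAnalysisEnvironment(nav) {
  if (nav.webdriver) return true; // automation flag set by headless drivers
  const plugins = nav.plugins ? nav.plugins.length : 0;
  const languages = nav.languages ? nav.languages.length : 0;
  return plugins === 0 && languages === 0; // bare profile typical of headless setups
}
```

Combined with server-side checks on geolocation and IP reputation (e.g. known VPN ranges), this is how a TDS serves malicious content only to "suitable" victims while staying invisible to sandboxes and crawlers.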
    Some of these attack chains have been found to serve bogus CAPTCHA pages that leverage the ClickFix strategy to trick users into running malicious code and infecting their machines with malware known as PEAKLIGHT, which is known to serve information stealers like Lumma.

    Central to the HelloTDS infrastructure is the use of .top, .shop, and .com top-level domains to host the JavaScript code and trigger the redirections, following a multi-stage fingerprinting process engineered to collect network and browser information.
    "The HelloTDS infrastructure behind fake CAPTCHA campaigns demonstrates how attackers continue to refine their methods to bypass traditional protections, evade detection, and selectively target victims," the researchers said.
    "By leveraging sophisticated fingerprinting, dynamic domain infrastructure, and deception tactics, these campaigns achieve both stealth and scale."

    Found this article interesting? Follow us on Twitter  and LinkedIn to read more exclusive content we post.

    SHARE




    #over #websites #infected #with #jsfiretruck
    Over 269,000 Websites Infected with JSFireTruck JavaScript Malware in One Month
    Jun 13, 2025Ravie LakshmananWeb Security / Network Security Cybersecurity researchers are calling attention to a "large-scale campaign" that has been observed compromising legitimate websites with malicious JavaScript injections. According to Palo Alto Networks Unit 42, these malicious injects are obfuscated using JSFuck, which refers to an "esoteric and educational programming style" that uses only a limited set of characters to write and execute code. The cybersecurity company has given the technique an alternate name JSFireTruck owing to the profanity involved. "Multiple websites have been identified with injected malicious JavaScript that uses JSFireTruck obfuscation, which is composed primarily of the symbols, +, {, and }," security researchers Hardik Shah, Brad Duncan, and Pranay Kumar Chhaparwal said. "The code's obfuscation hides its true purpose, hindering analysis." Further analysis has determined that the injected code is designed to check the website referrer, which identifies the address of the web page from which a request originated. Should the referrer be a search engine such as Google, Bing, DuckDuckGo, Yahoo!, or AOL, the JavaScript code redirects victims to malicious URLs that can deliver malware, exploits, traffic monetization, and malvertising. Unit 42 said its telemetry uncovered 269,552 web pages that have been infected with JavaScript code using the JSFireTruck technique between March 26 and April 25, 2025. A spike in the campaign was first recorded on April 12, when over 50,000 infected web pages were observed in a single day. "The campaign's scale and stealth pose a significant threat," the researchers said. "The widespread nature of these infections suggests a coordinated effort to compromise legitimate websites as attack vectors for further malicious activities." 
Say Hello to HelloTDS The development comes as Gen Digital took the wraps off a sophisticated Traffic Distribution Servicecalled HelloTDS that's designed to conditionally redirect site visitors to fake CAPTCHA pages, tech support scams, fake browser updates, unwanted browser extensions, and cryptocurrency scams through remotely-hosted JavaScript code injected into the sites. The primary objective of the TDS is to act as a gateway, determining the exact nature of content to be delivered to the victims after fingerprinting their devices. If the user is not deemed a suitable target, the victim is redirected to a benign web page. "The campaign entry points are infected or otherwise attacker-controlled streaming websites, file sharing services, as well as malvertising campaigns," researchers Vojtěch Krejsa and Milan Špinka said in a report published this month. "Victims are evaluated based on geolocation, IP address, and browser fingerprinting; for example, connections through VPNs or headless browsers are detected and rejected." Some of these attack chains have been found to serve bogus CAPTCHA pages that leverage the ClickFix strategy to trick users into running malicious code and infecting their machines with a malware known as PEAKLIGHT, which is known to server information stealers like Lumma. Central to the HelloTDS infrastructure is the use of .top, .shop, and .com top-level domains that are used to host the JavaScript code and trigger the redirections following a multi-stage fingerprinting process engineered to collect network and browser information. "The HelloTDS infrastructure behind fake CAPTCHA campaigns demonstrates how attackers continue to refine their methods to bypass traditional protections, evade detection, and selectively target victims," the researchers said. "By leveraging sophisticated fingerprinting, dynamic domain infrastructure, and deception tacticsthese campaigns achieve both stealth and scale." Found this article interesting? 
    THEHACKERNEWS.COM
    Over 269,000 Websites Infected with JSFireTruck JavaScript Malware in One Month
    Jun 13, 2025Ravie LakshmananWeb Security / Network Security Cybersecurity researchers are calling attention to a "large-scale campaign" that has been observed compromising legitimate websites with malicious JavaScript injections. According to Palo Alto Networks Unit 42, these malicious injects are obfuscated using JSFuck, which refers to an "esoteric and educational programming style" that uses only a limited set of characters to write and execute code. The cybersecurity company has given the technique an alternate name JSFireTruck owing to the profanity involved. "Multiple websites have been identified with injected malicious JavaScript that uses JSFireTruck obfuscation, which is composed primarily of the symbols [, ], +, $, {, and }," security researchers Hardik Shah, Brad Duncan, and Pranay Kumar Chhaparwal said. "The code's obfuscation hides its true purpose, hindering analysis." Further analysis has determined that the injected code is designed to check the website referrer ("document.referrer"), which identifies the address of the web page from which a request originated. Should the referrer be a search engine such as Google, Bing, DuckDuckGo, Yahoo!, or AOL, the JavaScript code redirects victims to malicious URLs that can deliver malware, exploits, traffic monetization, and malvertising. Unit 42 said its telemetry uncovered 269,552 web pages that have been infected with JavaScript code using the JSFireTruck technique between March 26 and April 25, 2025. A spike in the campaign was first recorded on April 12, when over 50,000 infected web pages were observed in a single day. "The campaign's scale and stealth pose a significant threat," the researchers said. "The widespread nature of these infections suggests a coordinated effort to compromise legitimate websites as attack vectors for further malicious activities." 
Say Hello to HelloTDS The development comes as Gen Digital took the wraps off a sophisticated Traffic Distribution Service (TDS) called HelloTDS that's designed to conditionally redirect site visitors to fake CAPTCHA pages, tech support scams, fake browser updates, unwanted browser extensions, and cryptocurrency scams through remotely-hosted JavaScript code injected into the sites. The primary objective of the TDS is to act as a gateway, determining the exact nature of content to be delivered to the victims after fingerprinting their devices. If the user is not deemed a suitable target, the victim is redirected to a benign web page. "The campaign entry points are infected or otherwise attacker-controlled streaming websites, file sharing services, as well as malvertising campaigns," researchers Vojtěch Krejsa and Milan Špinka said in a report published this month. "Victims are evaluated based on geolocation, IP address, and browser fingerprinting; for example, connections through VPNs or headless browsers are detected and rejected." Some of these attack chains have been found to serve bogus CAPTCHA pages that leverage the ClickFix strategy to trick users into running malicious code and infecting their machines with a malware known as PEAKLIGHT (aka Emmenhtal Loader), which is known to server information stealers like Lumma. Central to the HelloTDS infrastructure is the use of .top, .shop, and .com top-level domains that are used to host the JavaScript code and trigger the redirections following a multi-stage fingerprinting process engineered to collect network and browser information. "The HelloTDS infrastructure behind fake CAPTCHA campaigns demonstrates how attackers continue to refine their methods to bypass traditional protections, evade detection, and selectively target victims," the researchers said. 
"By leveraging sophisticated fingerprinting, dynamic domain infrastructure, and deception tactics (such as mimicking legitimate websites and serving benign content to researchers) these campaigns achieve both stealth and scale." Found this article interesting? Follow us on Twitter  and LinkedIn to read more exclusive content we post. SHARE    
    0 Commentarii 0 Distribuiri
  • OThink-R1: A Dual-Mode Reasoning Framework to Cut Redundant Computation in LLMs

    The Inefficiency of Static Chain-of-Thought Reasoning in LRMs
    Recent large reasoning models (LRMs) achieve top performance by using detailed chain-of-thought (CoT) reasoning to solve complex tasks. However, many of the simple tasks they handle could be solved by smaller models with fewer tokens, making such elaborate reasoning unnecessary. This echoes human thinking: we use fast, intuitive responses for easy problems and slower, analytical thinking for complex ones. While LRMs mimic slow, logical reasoning, they generate significantly longer outputs, increasing computational cost. Current methods for reducing reasoning steps lack flexibility, limiting models to a single fixed reasoning style. There is a growing need for adaptive reasoning that adjusts effort according to task difficulty.
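The fast/slow trade-off can be sketched as a toy dispatcher that allocates a token budget from a crude difficulty estimate. This is purely illustrative; the heuristic and budget values below are invented and are not part of OThink-R1:

```python
# Toy illustration of adaptive reasoning effort (not OThink-R1 itself):
# allocate a generation budget from a crude difficulty estimate.
def difficulty(question: str) -> float:
    """Crude proxy: longer questions containing digits and math
    operators score higher. Returns a value in [0, 1]."""
    signal = sum(ch.isdigit() or ch in "+-*/=" for ch in question)
    return min(1.0, (len(question.split()) + 3 * signal) / 60)

def reasoning_budget(question: str) -> int:
    """Fast mode (short answer) for easy prompts, slow long-CoT mode
    for hard ones. The cutoff and budgets are assumed values."""
    return 64 if difficulty(question) < 0.3 else 1024

print(reasoning_budget("What color is the sky?"))                      # 64
print(reasoning_budget("If 3x + 7 = 22 and y = x**2 - 4, solve for y."))  # 1024
```

Real adaptive-reasoning systems learn this routing rather than hand-coding it, which is exactly the gap the paper targets.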
    Limitations of Existing Training-Based and Training-Free Approaches
    Recent research on improving reasoning efficiency in LRMs can be categorized into two main areas: training-based and training-free methods. Training strategies often use reinforcement learning or fine-tuning to limit token usage or adjust reasoning depth, but they tend to follow fixed patterns without flexibility. Training-free approaches utilize prompt engineering or pattern detection to shorten outputs during inference; however, they also lack adaptability. More recent work focuses on variable-length reasoning, where models adjust reasoning depth based on task complexity. Others study “overthinking,” where models over-reason unnecessarily. However, few methods enable dynamic switching between quick and thorough reasoning—something this paper addresses directly. 
    Introducing OThink-R1: Dynamic Fast/Slow Reasoning Framework
    Researchers from Zhejiang University and OPPO have developed OThink-R1, a new approach that enables LRMs to switch intelligently between fast and slow thinking, much as humans do. By analyzing reasoning patterns, they identified which steps are essential and which are redundant. With help from another model acting as a judge, they trained LRMs to adapt their reasoning style to task complexity. The method reduces unnecessary reasoning by over 23% without losing accuracy. Using a dual-reference loss and curated fine-tuning datasets, OThink-R1 outperforms previous models in both efficiency and performance on a range of math and question-answering tasks. 
    System Architecture: Reasoning Pruning and Dual-Reference Optimization
    The OThink-R1 framework helps LRMs dynamically switch between fast and slow thinking. First, it identifies when LRMs include unnecessary reasoning, like overexplaining or double-checking, versus when detailed steps are truly essential. Using this, it builds a curated training dataset by pruning redundant reasoning and retaining valuable logic. Then, during fine-tuning, a special loss function balances both reasoning styles. This dual-reference loss compares the model’s outputs with both fast and slow thinking variants, encouraging flexibility. As a result, OThink-R1 can adaptively choose the most efficient reasoning path for each problem while preserving accuracy and logical depth. 
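A minimal sketch of a dual-reference loss in this spirit follows; the paper's exact formulation may differ, and taking the smaller of the two KL terms is an assumption of this example:

```python
# Sketch of a dual-reference loss in the spirit described above; the
# paper's exact formulation may differ. Distributions are plain lists
# of probabilities over the same vocabulary.
import math

def kl(p, q, eps=1e-12):
    """KL(p || q) between two discrete probability distributions."""
    return sum((pi + eps) * math.log((pi + eps) / (qi + eps))
               for pi, qi in zip(p, q))

def dual_reference_loss(p_model, p_fast, p_slow, alpha=0.1, ce=0.0):
    """Task loss `ce` plus a penalty toward the *closer* of the two
    reference styles, so the model is free to match either the fast or
    the slow reasoning variant (min-combination is assumed here)."""
    return ce + alpha * min(kl(p_model, p_fast), kl(p_model, p_slow))

p_model = [0.7, 0.2, 0.1]
p_fast  = [0.6, 0.3, 0.1]   # fast-thinking reference
p_slow  = [0.1, 0.1, 0.8]   # slow-thinking reference
print(dual_reference_loss(p_model, p_fast, p_slow))  # pulled toward fast
```

The design intent is that the penalty never forces one style: whichever reference the model already resembles dominates the gradient, preserving flexibility.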

    Empirical Evaluation and Comparative Performance
    The OThink-R1 model was tested on simpler QA and math tasks to evaluate its ability to switch between fast and slow reasoning. Using datasets such as OpenBookQA, CommonsenseQA, ASDIV, and GSM8K, the model demonstrated strong performance, generating fewer tokens while maintaining or improving accuracy. Compared to baselines such as NoThinking and DualFormer, OThink-R1 achieved a better balance between efficiency and effectiveness. Ablation studies confirmed the importance of pruning, KL constraints, and the LLM-Judge in achieving optimal results. A case study illustrated that unnecessary reasoning can lead to overthinking and reduced accuracy, highlighting OThink-R1's strength in adaptive reasoning. 

    Conclusion: Towards Scalable and Efficient Hybrid Reasoning Systems
    OThink-R1 is a large reasoning model that adaptively switches between fast and slow thinking modes to improve both efficiency and performance. It addresses the issue of unnecessarily complex reasoning in large models by analyzing and classifying reasoning steps as either essential or redundant. By pruning the redundant ones while maintaining logical accuracy, OThink-R1 reduces unnecessary computation. It also introduces a dual-reference KL-divergence loss to strengthen hybrid reasoning. Tested on math and QA tasks, it cuts down reasoning redundancy by 23% without sacrificing accuracy, showing promise for building more adaptive, scalable, and efficient AI reasoning systems in the future. 

    Check out the Paper and GitHub Page. All credit for this research goes to the researchers of this project.
    Sana Hassan, a consulting intern at Marktechpost and dual-degree student at IIT Madras, is passionate about applying technology and AI to address real-world challenges. With a keen interest in solving practical problems, he brings a fresh perspective to the intersection of AI and real-life solutions.
    WWW.MARKTECHPOST.COM
    OThink-R1: A Dual-Mode Reasoning Framework to Cut Redundant Computation in LLMs
CGShares https://cgshares.com