• TECHCRUNCH.COM
    AI's coming to the classroom: Brisk raises $15M after a quick start in school
    It's virtually impossible today to determine when a student's writing has been composed using ChatGPT or another GenAI tool, and it can be a nightmare to disprove incorrect accusations. An AI edtech startup called Brisk has built a tool that could at least help teachers identify some of the telltale signs, and it's now announcing $15 million in new funding on the back of decent traction.

    Alongside a writing inspector, Brisk's platform offers around 40 tools for teachers and students to use by way of a Chrome extension. The platform uses generative AI, computer vision and other AI features that Brisk says can help not only speed up work, but do the work better. These include writing lesson plans, tests and presentations; adjusting work for different abilities; grading work, and more.

    "The existing edtech stack as we know it, which is around 140 different tools that the average teacher in the U.S. uses in a given school year, is not ready for AI," Brisk's CEO and founder Arman Jaffer said in an interview. "We're trying to build the AI-native edtech stack."

    The funding will be used in part to build more tools, and in part to expand to more platforms. A Microsoft integration, aimed at the many schools that are Microsoft shops, is planned for autumn 2025.

    Business has so far been brisk for San Francisco-based Brisk. Since it raised a seed round of $5 million in September 2024, its user base has grown five-fold, and Jaffer said the company had 40x'd its revenue in 2024 (it's worth noting that the company was starting from zero). Brisk says more than 2,000 schools in 100 countries use its products today, and more than 90% of its business comes from inbound interest. One in five K-12 teachers in the U.S. have installed the Brisk Extension as of February 2025, Jaffer added.

    Bessemer Venture Partners is leading the round, with previous backers Owl Ventures, South Park Commons, and Springbank Collective also participating.

    Brisk's funding and growth come at a time when technology and education are becoming increasingly intertwined. Educators have spent years embracing an increasing array of technology to improve how they work, as well as to offset major changes in their tools (such as the decline of textbooks) and other areas, such as budget cuts. (The recent DoE changes in the U.S. have yet to play out, but they have raised concerns that they will spell yet more erosion of resources.)

    Enter tech, where adoption is easy, in a sense. There are literally hundreds of startups and much larger technology giants rolling out edtech apps. Some outfits cater directly to students and families, like the immense Khan Academy empire, while others direct themselves at schools and educators, such as the suites developed by Google and Microsoft.

    And just as enterprises have embraced consumerization in their IT departments, looking for apps that have the same usability as the most popular consumer apps, so have teachers looking for inroads to connect with students. Kahoot is a key example of how education has been gamified, the theory being that this is one way of making learning more accessible.

    AI is yet another step in edtech's natural evolution. AI companies are building learning tools to that end, and their basic pitch is much like Brisk's: AI is coming whether you like it or not, and it will make everyone's lives better. But as with other segments of the world of work and play, not all AI moves are received with open arms.
OpenAI's teachers' guide to ChatGPT, released in November 2024 (arguably well after the horse had bolted), was met with criticism over the bigger issues that it failed to address around accuracy and data protection.

Jaffer founded Brisk after spending time in edtech in a different capacity. He spent more than five years at the Chan Zuckerberg Initiative, where he conceived of and led a team building Notebooks, a Google Docs alternative aimed at improving collaboration between students and teachers. Ultimately, Notebooks did not take off, not least because, well, Google Docs does the job, but also because AI really changes the game for collaboration. That ethos was carried into Jaffer's next swing of the founder bat.

If using AI rings alarm bells, Brisk wants to muffle that with a measured approach: assistance, not replacement. The company's student writing inspector does not conclude "this was written by ChatGPT." It starts with a video of a student's work process on-screen, which it then watches in fast motion, flagging when that student has copy/pasted information or is otherwise doing things that are uncharacteristic of how they work. This is then sent on to the teacher, who can assess whether it could be an indication of copying from somewhere else, or whether the work was indeed created by GenAI.

The most popular tool in the stack, Targeted Feedback, uses generative AI to read student essays (on Google Docs) and create comments that are tailored to age, a grading rubric or other standards if they've been uploaded or selected. Before anything is shared with students, teachers can review and edit the comments (in the best-case scenario, they are doing that rather than just shifting them along with no oversight).

Whether the idea of AI taking on some of teachers' work, and maybe even doing it better, is loved or feared in the world of education, it seems that the trend line is too clear to be ignored, said Kent Bennett, the Bessemer partner who led this investment.

"We're big believers at this AI moment in tracking sectors like education technology, which have a reputation for being tech phobic. This reputation often arises because the high-value workflow in these environments involves human language, and thus wasn't as addressable with legacy software; with LLMs all of that can change," he told TechCrunch in an email exchange. "[But] one of the biggest surprises as we looked into AI-powered ed-tech was that educators were not just tolerating AI, they are aggressively seeking it out," he said, adding that it is obvious that teachers cannot be cut out of the equation altogether.

Looking forward, Brisk will be building more immersive tools beyond its extensions. Later this year, it will be switching on a new web platform so that educators can work cohesively and natively within the Brisk environment. It will include new resources and activities, Jaffer said.

Brisk also wants to offer more multimodal integrations. These will include the ability for students to submit image-based work, in addition to text, for evaluation, and a podcast feature to generate audio versions to describe documents and more.
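The article only describes the writing inspector at a high level: it replays a student's editing session and flags bursts of pasted text for the teacher to review. As a rough illustration of that kind of heuristic (not Brisk's actual implementation; the revision-log format, threshold values and function names below are hypothetical), a minimal sketch might look like this:

```python
# Hypothetical sketch, not Brisk's code: flag edits in a document revision log
# where a large block of text appears almost instantly, which a teacher might
# want to review as a possible copy/paste.

from dataclasses import dataclass
from typing import List

@dataclass
class Revision:
    timestamp: float     # seconds since the writing session started
    chars_added: int     # characters inserted in this revision
    inserted_text: str   # text added in this revision

def flag_suspicious_revisions(revisions: List[Revision],
                              paste_threshold: int = 200,
                              max_gap_seconds: float = 2.0) -> List[Revision]:
    """Return revisions where a large chunk of text appeared nearly at once.

    paste_threshold and max_gap_seconds are illustrative values, not tuned.
    """
    flagged = []
    prev_time = 0.0
    for rev in revisions:
        elapsed = rev.timestamp - prev_time
        # Hundreds of characters arriving within a couple of seconds is faster
        # than normal typing, so surface it for the teacher to review.
        if rev.chars_added >= paste_threshold and elapsed <= max_gap_seconds:
            flagged.append(rev)
        prev_time = rev.timestamp
    return flagged

# Example: two small typed edits followed by a 500-character burst.
log = [
    Revision(10.0, 35, "The French Revolution began in"),
    Revision(14.5, 42, " 1789, when economic crisis and"),
    Revision(15.1, 500, "<large block appearing in one edit>"),
]
for rev in flag_suspicious_revisions(log):
    print(f"Review edit at t={rev.timestamp}s: {rev.chars_added} chars added at once")
```

The point, as in the article, is that such a tool surfaces unusual editing patterns rather than rendering a verdict; the judgment stays with the teacher.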
  • 3DPRINTINGINDUSTRY.COM
    Eplus3D's Red-Laser Technology Overcomes Copper 3D Printing Barriers at TCT Asia 2025
    At TCT Asia 2025, Chinese metal 3D printer manufacturer Eplus3D unveiled a significant advancement in metal additive manufacturing: the successful 3D printing of pure copper and copper alloys using red-laser technology. The showcased examples of meter-scale copper alloy parts demonstrate how the company addresses long-standing challenges associated with copper's high reflectivity and thermal conductivity, which have historically made it difficult to process using laser-based additive manufacturing techniques.

    The ability to produce stable, high-performance meter-scale copper parts with long-cycle reliability, all without the need for major hardware modifications, is expected to benefit industries such as aerospace, automotive, and electronics.

    For those who missed TCT Asia 2025, Eplus3D will also be present at AMUG 2025 in Chicago, Illinois, from March 30 to April 3. Visitors can explore the company's latest innovations in metal additive manufacturing at Booth P-14.

    Copper alloy parts from Eplus3D. Photo via Eplus3D.

    Overcoming the Challenges of Copper 3D Printing

    Eplus3D explained that copper's low absorption of traditional laser wavelengths has historically led to defects such as incomplete melting, voids, cracks, and inconsistent layer bonding. Additionally, its high thermal conductivity accelerates heat dissipation, increasing thermal stress and the risk of part failure. Eplus3D's latest approach aims to address these challenges.

    A key demonstration at TCT Asia 2025 was the 1030 x 175 mm CuCrZr impeller, produced on the EP-M1250 system. According to the company, this component achieved 99.97% density while preserving copper's superior thermal properties, an essential factor for applications such as aerospace thermal management.

    Eplus3D copper impeller. Photo: Eplus3D.

    Eplus3D also highlighted its expertise in multi-laser Powder Bed Fusion (PBF) systems, including the EP-M2050, EP-M1550, and EP-M1250. The company emphasized that these advancements contribute to ongoing industry efforts to enhance the scalability and reliability of metal 3D printing.

    Advancements in Copper 3D Printing

    Copper 3D printing is gaining traction in the AM industry due to its ability to offer greater geometric flexibility, reduced material waste, and cost savings for low-volume production.

    In response to the growing demand for 3D printed GRCop-42 copper alloy in space applications, Nikon SLM Solutions developed new material parameters for NASA's GRCop-42. This pre-configured solution aims to improve powder availability and optimize the material for SLM 3D printers. Designed for scalability, these parameters were tailored for large-format 3D printers such as the NXG XII 600. Nikon SLM states that they enable 99.97% density while ensuring consistent properties across both single- and multi-laser overlap regions within the printer's build area.

    In a strategic collaboration, Tucker Induction Systems, an induction heating firm, has partnered with Nikon SLM Solutions to introduce copper 3D printing services in the United States. According to Nikon SLM, this new capability enhances Tucker Induction Systems' production efficiency while enabling the creation of complex, high-performance designs. Rocky Tucker, Owner of Tucker Induction Systems, highlighted the impact of adopting the SLM 280 PS, stating that it has allowed the company to develop functional copper inductors and drive innovation. He credited Nikon SLM's technology and collaborative approach as key factors in their success.

    Featured image shows the Eplus3D copper impeller. Photo: Eplus3D.

    Paloma Duran holds a BA in International Relations and an MA in Journalism. Specializing in writing, podcasting, and content and event creation, she works across politics, energy, mining, and technology. With a passion for global trends, Paloma is particularly interested in the impact of technology like 3D printing on shaping our future.
  • WWW.ARCHITECTURAL-REVIEW.COM
    Join the AR in Milan on 9 April to hear from winner of AR Future Projects Gort Scott
    Register for free to join AR editors and AR Future Projects winners at Dropcity during the Salone del Mobile.

    Winner of AR Future Projects Gort Scott will present their winning project, New Court for Girton College at the University of Cambridge, on Wednesday 9 April at Dropcity in Milan, during the Salone del Mobile.

    AR Future Projects Live 2025
    Wednesday 9 April, 6pm
    Dropcity, 60 Via Giovanni Battista Sammartini, 20125 Milano
    Register here to attend for free

    New Court for Girton College at the University of Cambridge by Gort Scott

    We will also hear from Malta-based practice AP Valletta as well as Guillaume Othenin-Girard at the University of Hong Kong about their highly commended projects: the restoration of a school in Ghana and an archaeological field laboratory in Armenia.

    Osu Salem Presbyterian School in Accra, Ghana, designed by AP Valletta and David Kojo Derban
    Glkhatun archaeological field laboratory in Urtsadzor, Armenia, designed by Guillaume Othenin-Girard and the University of Hong Kong

    Launched in 2002, the AR Future Projects awards are a window into tomorrow's cities. Spanning 12 categories, they celebrate excellence in unbuilt and incomplete projects, and the potential for positive contribution to communities, neighbourhoods and urban landscapes around the world. Find out more about this year's winners here.

    All winners have been published in the 2025 Future Projects awards catalogue; purchase a copy here.

    2025-03-26
    AR Editors
  • WWW.COMPUTERWEEKLY.COM
    Podcast: AI data needs scalable flash, but also needs to be FAIR
  • WWW.COMPUTERWEEKLY.COM
    Military AI caught in tension between speed and control
    Military planners and industry figures say artificial intelligence (AI) can unlock back-office efficiency for the UK's armed forces and help commanders make faster, better-informed decisions, but intractable problems baked into the technology could further reduce military accountability.

    Speaking on a panel about the ethics of using autonomous technologies in warfare at the Alan Turing Institute-hosted AI UK event in mid-March, industry figures and a retired senior British Army officer made the case for wider adoption. They argued that proliferating AI throughout UK defence will deter future conflict, free up resources, improve various decision-making processes (including military planning and target selection) and stop the country from irreversibly falling behind its adversaries.

    While these speakers did highlight the importance of ensuring meaningful human oversight of military AI, and the need for global regulation to limit the proliferation of uncontrollable AI systems in this context, Elke Schwarz, a professor of political theory at Queen Mary University London and author of Death Machines: The ethics of violent technologies, argued there is a clear tension between autonomy and control that is baked into the technology. She added that this intractable problem with AI means there is a real risk that humans are taken further out of the military decision-making loop, in turn reducing accountability and lowering the threshold for resorting to violence.

    Major General Rupert Jones, for example, argued that greater use of AI can help UK defence better navigate the muddy context of modern warfare, which is characterised by less well-defined enemies and proxy conflicts. "Warfare's got more complicated. Victory and success are harder to define," he said, adding that the highest potential use of AI is in how it can help commanders make the best possible decisions in the least time.

    "To those who are not familiar with defence, it really is a race: you're racing your adversary to make better, quicker decisions than they can. If they make faster decisions than you, even if they're not perfect decisions, they will probably be able to gain the momentum over you."

    On top of the technology's potential to enhance decision-making, Jones said the hugely expensive nature of running defence organisations means AI can also be used to boost back-office efficiency, which in turn would unlock more funds for use on front-line capabilities. "AI gives you huge efficiency, takes humans out of the loop, frees up money, and one thing we need in UK defence right now is to free up some money so we can modernise the front end," he said.

    However, he noted that the potential of the technology to enhance decision-making and unlock back-office efficiencies would rest on the ability of UK defence to improve its underlying data practices so that the vast amounts of information it holds can be effectively exploited by AI. Jones added that UK defence organisations should begin deploying in the back office first to build up their confidence in using the technology, before moving on to more complex use cases like autonomous weapons and other AI-powered front-line systems: "Build an AI baseline you can grow from."

    While Schwarz agreed that AI will be most useful to the military for back-office tasks, she took the view this is because the technology is simply not good enough for lethal use cases, and that the use of AI in decision-making will muddy the waters further. "With decision-making, for example, you would need to have enormously robust, reliable and always up-to-date data to replace the capabilities and cognitive capacities of a human decision-maker," she said, adding that the dynamics inherent in the technology create a clear tension between speed and control.

    "On one hand, we say, 'Well, we need to have meaningful human control at all points of using these systems', but ultimately the raison d'être for these systems is to take the human further out of the loop, so there's always tension," said Schwarz. "The reason the human is taken further out of the loop is because the logic of the system doesn't cohere that well with the cognitive logic of how we, as humans, process data."

    Schwarz added that on top of the obvious tension between cognition speed and meaningful human control, there is also the problem of automation bias, whereby humans are more likely to trust computer outputs because there is a misplaced sense that the results are inherently objective. "We are more likely to trust the machine decision that we have less time to overrule, where we cannot create a full mental picture in time to make a human decision; as we are further embedded into digital systems, those are the kinds of tensions that I don't see going away anytime soon. They're intractable problems," she said. "That takes us to ethics and the question of, what do we do with ethical decisions when the human is taken out?"

    While Schwarz urged extreme caution, Henry Gates, associate director at AI defence startup Helsing, said there is a pressing need to move fast with the development of military AI so that the UK does not fall behind nefarious actors and is able to have a greater say over how autonomous military systems are regulated. "If we are just a country that doesn't have any of these weapons, people aren't really going to listen to us," he said, adding that moving at pace with military AI can also help build an alternative deterrence. "In the same way we have nuclear weapons as a deterrence to nuclear war, AI potentially provides a route towards conventional deterrence that reduces armed conflict."

    Schwarz, however, warned against putting all our eggs in the AI basket to deter war, arguing there needs to be greater investment in human capabilities for dialogue, trust and diplomacy. She also warned that instead of acting as a deterrent, AI's socio-technical nature (whereby the technical components of a given system are informed by social processes and vice versa) means it can negatively shape humans' perspectives of one another, leading to dehumanisation.

    "Ultimately, it has always been the case [with] technologies that the more we come to rely on them, the more they shape our perspectives about us, and about others as well," she said, adding this is certainly the case with AI because, unlike other tools of war, like tanks or guns that are used as physical prosthetics, the technology acts as a cognitive prosthetic. "What is the logic of all of that? Well, an AI system sees other humans as objects, necessarily edges and traces, so implicit then is an objectification, which is problematic if we want to establish relationships."

    On the issue of meaningful human control, Gates added there are three things to consider: the extent to which decision-making is delegated to AI, performance monitoring to ensure models do not drift from their purpose, and keeping humans in full control of how AI systems are being developed.

    However, Keith Dear, managing director of Fujitsu's Centre for Cognitive and Advanced Technologies, argued that the capabilities of AI have come so far in such a short space of time that it will soon be able to outperform humans on how to apply the laws of war to its decisions. "For a target to be justified under the law of armed conflict, it has to be positively identified, it has to be necessary, it has to be proportionate, it has to be humane, so no uncontrolled effects, and it has to be lawful. All of those things are tests that you could apply to an AI in the same way that we apply them to a soldier, sailor or an airman serving on the front line," he said. "When you delegate authority, it has to outperform us on those things, and if it does outperform us in those roles, where you can baseline and benchmark that, it becomes unethical not to delegate authority to the machine, which has a lower false negative in making those decisions than us."

    Highlighting how the speed of modern stock trading means it is largely left to computers, Dear added that AI will create a similar situation in warfare: because it will have eclipsed the speed of human cognition, decision-making can and should be left to these autonomous systems. "It's an AI watching the AI. You may have humans before the loop, but the idea that, as warfare speeds up and we get to AGI [artificial general intelligence], there'll be someone in the loop is perverse. I think it's a choice to lose," he said.

    Commenting on the idea that AI will reduce human suffering in conflict and create a future where wars are fought between armies of drones, Gates said it was unlikely, noting that while it may change the character of war, it does not change the underlying logic, which is how one group can impose its will on another. Jones agreed, noting that whether or not an AI is sat in the middle, "the idea is to hurt the people on the other side. You are still trying to influence populations, political decision-makers, militaries," he said.

    For Dear, there will be no role for humans on the battlefield. "When your machines finish fighting and one side has won, it'll be no different to having a human army that won on the battlefield; the point then is that [either way] you have no choice but to surrender or face a war of extermination," he said.

    Schwarz, however, highlighted the reality that many of today's AI systems are simply not very good yet, and warned against making wildly optimistic claims about the revolutionary impacts of the technology in every aspect of life, including warfare. "It is not a panacea for absolutely everything," she said.

    Read more about military technology:
    Google drops pledge not to develop AI weapons: Google has dropped an ethical pledge to not develop artificial intelligence systems that can be used in weapon or surveillance systems.
    Government insists it is acting responsibly on military AI: The government has responded to calls from a Lords committee that it must proceed with caution when it comes to autonomous weapons and military artificial intelligence, arguing that caution is already embedded throughout its approach.
    UK Defence Committee urges MoD to embrace AI: The Defence Committee outlines changes it thinks the Ministry of Defence should make to realise the battlefield advantages of artificial intelligence.
  • WWW.ZDNET.COM
    This Eufy robot vacuum with a built-in handheld unit is a steal at $200 off
    Eufy features the cheapest handheld and robot vacuum combination this year, with a handheld unit built into the robot's body instead of the dock - and it's on sale.
  • WWW.ZDNET.COM
    Linux kernel 6.14 is a big leap forward in performance and Windows compatibility
    The new release is finally here with cutting-edge features that should please gamers.
  • WWW.FORBES.COM
    3 Ways The Slippery Slope Fallacy Hurts Couples, By A Psychologist
    Are you seeing signs of trouble where there are none, letting one doubt snowball into a disaster? Here's how the slippery slope fallacy might be affecting your relationships. (Photo: Getty)

    Relationships are all-consuming, and for some people, even the smallest moments can set off a spiral of worry. A delayed text, a change in tone or a quiet partner at dinner is all it takes for fear to take over. The assumption is not just that something is wrong, but that one minor issue will inevitably lead to a chain of much worse events.

    This is the slippery slope fallacy: when a small event is believed to trigger an escalating sequence of negative consequences, even when there is no real basis for it. However, not all slippery slope arguments are fallacious. A recovering addict might reason, "If I have one drink, I'll want another, and eventually, I'll relapse." In this case, past experience supports the concern. In relationships, though, this mental trap can create unnecessary distress, especially when it's based solely on fear and assumption rather than evidence.

    For instance, you might have thoughts like, "If they cancel plans today, soon they won't make time for me at all, and eventually, they'll stop loving me." Recognizing which fears are unfounded can prevent avoidable conflict and build healthier and more trusting relationships. Here are three ways the slippery slope fallacy might be hurting your relationship, and how to deal with it.

    1. Slippery Slope Thinking Triggers Unnecessary Conflict

    When one partner assumes the worst without concrete evidence, small misunderstandings can quickly spiral into full-blown conflicts. Instead of addressing an issue calmly, reacting through fear can lead to unnecessary arguments.

    For example, if your partner doesn't respond to a message immediately, you might assume they are losing interest. Reacting with anger or withdrawal due to this assumption can lead to a cycle of misunderstandings. In reality, it's possible your partner may have simply been busy. Over time, repeated instances of this kind of thinking can create tension, making the relationship feel unstable and exhausting.

    A study published this month in Scientific Reports shows that people with lower levels of mindfulness, especially men, are more likely to overestimate their partner's negative emotions, assuming the worst even when there's no real evidence. Researchers suggest that mindfulness helps reduce this tendency by allowing individuals to see their partner's emotions more accurately, rather than exaggerating negativity or reacting impulsively. A mindful person would be less likely to jump to conclusions and more likely to consider other possibilities (like their partner being busy), preventing unnecessary conflict.

    To create a healthy relationship dynamic, it's important to pause and assess situations objectively rather than reacting impulsively. Developing this awareness encourages a deeper understanding of your partner's emotions, ultimately strengthening the relationship's security and stability. A healthy relationship thrives not on certainty but on trust, where partners choose to interpret each other's actions with curiosity rather than catastrophe.

    2. Slippery Slope Thinking Fuels A Fear Of Change

    People with strong slippery slope beliefs tend to assume that one small change will lead to inevitable, uncontrollable negative consequences. You might find yourself thinking, "If my partner is changing now, it means they'll become a completely different person, and our relationship will fall apart."

    Research published in the Journal of Social and Personal Relationships found that when one partner expected change but the other didn't, relationship quality suffered. People who engage in slippery slope thinking are more likely to struggle in relationships when their partner changes. Instead of seeing growth as natural, they fear it will have disastrous consequences. Learning to reframe change as neutral or even positive, rather than assuming it will harm the relationship, can help maintain emotional safety and stability. For instance, a person might think, "If my partner has started working out, they'll keep changing and eventually leave me."

    When you catch yourself assuming the worst, ask yourself, "Is there real evidence for this, or am I jumping to conclusions?" Replace catastrophic thoughts with more balanced ones, like, "My partner working out doesn't mean they'll leave me. It's just something they enjoy and makes them happy."

    If you're feeling anxious about change, talk to your partner instead of assuming the worst. Express concerns without blaming: "I've been feeling uneasy about this change. Can we talk about it?" Instead of viewing change as a threat, view it as an opportunity for both partners to evolve. Remind yourself that healthy relationships allow space for individual growth without sacrificing emotional security.

    3. Slippery Slope Thinking Breeds Insecurity And Control

    When fear-driven assumptions take over, they can lead to a heightened sense of insecurity and a need to control the relationship. If you believe that small issues will escalate into worst-case scenarios, you may feel the urge to micromanage your partner's actions or seek constant reassurance.

    For example, if your partner makes a new friend, you might think, "They'll enjoy their company more than mine, start spending less time with me and eventually leave." This fear-based thinking can result in behaviors like checking their phone, needing frequent validation or setting unnecessary restrictions, all of which can strain the relationship.

    A 2017 study published in Current Opinion in Psychology highlights that heightened sensitivity to threats in the relationship can lead you to seek excessive reassurance, ruminate over worst-case scenarios and misinterpret your partner's actions as signs of rejection. This leads to micromanaging or controlling the relationship out of fear, as anxious individuals become preoccupied with maintaining closeness and reducing perceived threats. However, these behaviors can weigh on your relationship by overwhelming your partner and reinforcing a cycle of insecurity.

    Instead of letting insecurity dictate your actions, try grounding yourself in reality. Ask yourself, "Do I have actual evidence that my partner is pulling away, or is this my fear talking?" Instead of acting on worst-case scenarios, focus on open communication and self-soothing strategies to manage distress. Ultimately, grounding yourself in reality and addressing concerns with patience and respect can strengthen your relationship rather than strain it.

    Breaking Free From Slippery Slope Thinking

    When you challenge irrational fears with curiosity rather than control, you create a foundation of trust rather than tension. A healthy relationship is built on mutual understanding, not constant reassurance. The next time you catch yourself spiraling into worst-case scenarios, pause and ask: "Is this fear or fact?" Shifting your mindset from catastrophe to calm consideration can help you approach challenges with a clear head and a compassionate heart.

    If you find that these thought patterns are deeply ingrained and difficult to manage alone, seeking support through therapy, engaging in deeper self-reflection and building on mindfulness practices can help you develop healthier ways of thinking and responding in relationships.

    Are you curious to know whether your relationship is genuinely thriving? Take this science-backed test to find out: Relationship Flourishing Scale
  • WWW.FORBES.COM
    Complaining Of Chronic Pain Doesn't Make You A Complainer
    1990s symbolic hand caught in rat trap (Photo by Camerique/Getty Images)

    We all know what a complainer is: it's a person who finds the dark side of everything, who turns a casual conversation starter ("How are you doing?") into a somber soliloquy about all the (usually minor) problems making their life unbearable.

    Too often, people with chronic pain are viewed as complainers by friends, family, and even their clinicians. That view misunderstands the neuroscience of most chronic pain. Consider a now-classic study, involving brain scans and fingers crushed by hydraulic pistons.

    A volunteer sits in a research lab with a rubber probe resting on her thumbnail. The probe, with a point the thickness of a colored pencil, is attached to a hydraulic piston calibrated to increase pressure on her thumbnail in precise increments, ranging from 0.45 kg/cm² (approximately equivalent to balancing a Harlequin romance on top of the rubber probe) to 9 kg (the weight of a well-fed dachshund). At small amounts of pressure, she reports being unbothered by the probe. But by the time the piston exerts 1.4 kg of pressure on her thumbnail (think: a large bag of rice), she tells the researchers she is feeling pain.

    It was the early 2000s, and the study was led by Dan Clauw, a rheumatologist at the University of Michigan. Deriving its name from the Greek word "rheuma," which roughly translates as bodily discharges, the specialty typically focuses on disorders like gout and lupus that are characterized by painful joint swellings, sometimes accompanied by viscous discharge. Clauw is a renowned rheumatologist, but not for studying diseases like lupus, nor for concerning himself with bodily discharges. He studies chronic pain.

    And in that study, he was trying to understand the neuropsychology of fibromyalgia, a disorder in which people experience widespread bodily pain, typically in soft tissues rather than joints, with nothing palpably abnormal on physical exam (no redness, no swelling) other than an exquisite sensitivity to pressure on those tissues.

    Clauw projects the demeanor of a Scrabble aficionado rather than that of a mad scientist who crushes people's thumbnails. Clauw was subjecting participants to pain in order to understand the cause of those pains. In the study, his team measured the pain thresholds of two groups of volunteers: people with fibromyalgia and people with no history of chronic pain. Think of a pain threshold as the amount of a noxious stimulus it takes before a person says "that hurts." The stimulus could be hot temperature, cold temperature, electrical shocks, or, as you've probably figured out, hydraulic pressure. In the study, Clauw's team confirmed what previous research had established: that it took about twice as much pressure for healthy people to report pain as it did for those with fibromyalgia.

    An uncharitable interpretation of these findings would conclude that people with fibromyalgia are complainers. They don't feel more pain than anyone else, they just report feeling more pain. The kind of discomfort most people would describe as being 2 out of 10, they describe as being 6 out of 10. By this interpretation, the problem for people with fibromyalgia is less about how much pain they feel and more about how much they complain about things that aren't even painful.

    But there is another possibility. Rather than having lower thresholds for complaining that sensations are painful, people with fibromyalgia might actually experience pain at lower thresholds. This might sound like semantics: I call it pain, you call it a different threshold for reporting pain. But feeling pain and reporting pain aren't the same phenomenon. Pain signifies suffering and distress, a feeling that noticeably disrupts people's ability to function normally: to read a book, enjoy a movie, or spend a restful night in bed. Accompanying this feeling will be bodily changes signifying that they are experiencing an active injury: stress hormones surge; pain centers of the brain race into action. That means we need to know not only when someone reports something as being painful but, also, whether the parts of their body that sense and experience pain are active when they make those reports.

    Clauw's study was designed to disentangle these two competing interpretations. To do so, before subjecting people's thumbnails to near-dachshund levels of pressure, his team placed participants in fMRI scanners. That abbreviation stands for functional magnetic resonance imaging, and it's a technique that identifies shifts in oxygen use across the brain. When people lie in fMRI scanners, researchers can locate which parts of their brain are relatively active when they report feeling happy, or when they tell researchers they are experiencing pain.

    When Clauw's team exerted 5 kg of pressure on people's thumbnails, both groups of volunteers, people with and without fibromyalgia, reported feeling pain. Simultaneously with these reports, pain centers in their brains lit up, amygdalas and anterior insulas firing in response to the stimuli. By contrast, when Clauw's team exerted very small amounts of pressure on people's thumbnails, the pain regions of people's brains were quiet, showing no increases over baseline, both for people with fibromyalgia and those without. So far, no difference between the two groups.

    Now for the critical test: what happens when Clauw's team exerts just enough pressure for people with fibromyalgia to report pain while those without fibromyalgia remain unbothered? If people with fibromyalgia are simply complainers, their brains will look the same as those of people who don't have the disorder. Neither group will exhibit neurological signs of pain, but people with fibromyalgia will still complain about the pressure on their fingers.

    But that's not what happened. Instead, in healthy people, the pain centers in their brains remain dim. However, in people with fibromyalgia, the pain centers light up, waves of electricity racing from left to right, from midbrain to the higher cortex, the same pattern of brain activity occurring when they are subjected to dachshund-levels of pressure. This brain activity leads to the provocative conclusion that people with fibromyalgia don't simply have lower thresholds for reporting a stimulus as painful. They have lower thresholds for experiencing pain.

    As I explored previously, clinicians do a disservice to people with chronic pain when they look at the location of their pain (the lower back), find no pathology (a normal X-ray), and tell them there's nothing wrong. As Clauw's study shows, the problem could be hypersensitive pain pathways in the person's brain.
  • TSMC's 2nm chips near production, iPhone 18 Pro likely to be first to adopt
    The big picture: New reports suggest that TSMC's upcoming 2nm semiconductors are almost ready for prime time, with indications of very positive yields. Apple will likely unveil the first 2nm product in 2026, and other tech giants are already in line for TSMC's next bleeding-edge node.

    The Commercial Times reports that TSMC is set to begin taking orders for wafers built on its 2nm N2 node process next week. The semiconductor giant is likely on schedule to start mass production later this year, with the iPhone 18 Pro's A20 chip bringing the node to consumers in late 2026.

    TSMC's Kaohsiung plant will hold a production expansion ceremony on March 31, with orders scheduled to begin the next day. Progress has advanced smoothly, as the company's 2nm semiconductors reached 60 percent yields late last year, and analyst Ming-Chi Kuo reports that it has since advanced well beyond that. Pursuing an aggressive strategy, TSMC aims to produce 50,000 wafers per week by the end of 2025.

    Unsurprisingly, Apple is first in line to adopt 2nm semiconductors for its chips. The company's iPhone 15 Pro was the first consumer device to utilize TSMC's 3nm node, and the iPhone 18 Pro will follow that trend late next year. Standard iPhone 18 models will likely employ a refined TSMC 3nm process.

    Other early 2nm adopters include Intel, AMD, Broadcom, and Amazon AWS. Although Intel is receiving some of its chips from TSMC, the company also aims to compete with its 18A node, which is scheduled for tape-out sometime during the first half of 2025. 18A will debut in Intel's Panther Lake laptop CPUs and Clearwater Forest server processors later this year, slightly ahead of TSMC's roadmap.

    The two upcoming nodes will introduce gate-all-around architectures, which reduce power leakage by managing electrical currents more closely. Power leakage is a growing problem as newer chips become smaller and pack more transistors into smaller spaces. 18A will also feature backside power delivery to improve performance, while TSMC plans to debut its take on the technology with next year's A16 node.

    TSMC might charge $30,000 per 2nm wafer, but whether the number represents Apple's discount or the standard price remains unclear. For comparison, Apple currently pays $18,000 per 3nm wafer, but tariffs could raise that price to between $20,000 and $23,000. The rising costs of 3nm and 2nm semiconductors will likely trickle down to consumers.
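The per-wafer prices only hint at what individual chips end up costing, since that depends on die size and yield, which the article does not report. As a rough back-of-the-envelope sketch (the ~100 mm² die area and 70% yield below are assumptions chosen only to make the arithmetic concrete, and the die-per-wafer formula is the standard edge-loss approximation), the jump from $18,000 to $30,000 per wafer might look like this:

```python
# Illustrative cost-per-die arithmetic; wafer prices come from the article,
# die area and yield are assumed, not reported figures.

import math

def dies_per_wafer(die_area_mm2: float, wafer_diameter_mm: float = 300.0) -> int:
    """Rough gross die count: usable wafer area divided by die area,
    minus a simple correction for partial dies at the wafer edge."""
    radius = wafer_diameter_mm / 2
    wafer_area = math.pi * radius ** 2
    edge_loss = (math.pi * wafer_diameter_mm) / math.sqrt(2 * die_area_mm2)
    return int(wafer_area / die_area_mm2 - edge_loss)

def cost_per_good_die(wafer_price_usd: float, die_area_mm2: float, yield_fraction: float) -> float:
    good_dies = dies_per_wafer(die_area_mm2) * yield_fraction
    return wafer_price_usd / good_dies

# Assumed ~100 mm^2 die and 70% yield; wafer prices from the article.
for label, price in [("3nm today", 18_000),
                     ("3nm with tariffs (upper bound)", 23_000),
                     ("2nm (reported)", 30_000)]:
    print(f"{label}: ~${cost_per_good_die(price, 100, 0.70):.0f} per good die")
```

Under those assumed numbers, the per-die cost rises from roughly $40 to roughly $67, which illustrates the kind of increase the article suggests could eventually trickle down to consumers.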