• "A City of a Dawn Calm" - Unreal Engine Cinematics

    Hello! This is my second time posting work on the forum.
    “A City of a Dawn Calm” is a recreation of Seoul.
    I remade the city of Seoul in a cyberpunk style.
    Hope you like it!

    Also, please visit our School…
    #city #dawn #calm #unreal
    FORUMS.UNREALENGINE.COM
  • ‘A Minecraft Movie’ Announces Streaming Premiere Date

    #minecraft #movie #announces #streaming #premiere
    SCREENCRUSH.COM
    The biggest American film of the year so far, perhaps a little surprisingly, is A Minecraft Movie, the highly meme-able comedy based on the hugely popular series of Minecraft video games. So far, the film, directed by Jared Hess, has grossed over $950 million in theaters worldwide, nearly $150 million more than its closest competition. (That would be Lilo & Stitch.) With little else to prove in theaters, the movie is now headed to streaming, and will premiere on Max (soon to be HBO Max again) in one week.

    In the film, a former video game champion (Jason Momoa) and a troubled teenager (Sebastian Hansen) discover a magical object that leads them into the Minecraft world. There, they meet — who else? — Steve, played by Jack Black. The human heroes need to team up to save this strange, blocky universe from the evil Malgosha, a piglin from the fiery Nether realm.

    The film’s quirky sense of humor and highly quotable dialogue (like “Chicken jockey!” and “I ... am Steve!”) helped A Minecraft Movie go viral even before it hit theaters. Huge crowds of young teens flocked to the theater (something they don’t do all that often anymore, sadly) to scream the lines back at the screen, copying a trend they’d seen on TikTok. I witnessed it first-hand and, since I don’t use TikTok, I was totally baffled. (I’m so old.) Kids got so rowdy at some screenings that police had to be called to settle things down. (Warner Bros. later held special screenings where screaming back at the screen was encouraged.)

    Now that Minecraft will be on Max, you can yell “Flint and steel!” to your heart’s content without having to worry about getting arrested (unless your neighbors are real narcs). Chicken jockeys ... start your, uh, chickens. A Minecraft Movie debuts on Max on June 20.
  • Tanks, guns and face-painting

    #tanks #guns #facepainting
    WWW.THEVERGE.COM
    Of all the jarring things I’ve witnessed on the National Mall, nothing will beat the image of the first thing I saw after I cleared security at the Army festival: a child, sitting at the controls of an M119A3 Howitzer, being instructed by a soldier on how to aim it, as his red-hatted parents took a photo with the Washington Monument in the background.

    The primary stated reason for the Grand Military Parade is to celebrate the US Army’s 250th birthday. The second stated reason is to use the event for recruiting purposes. Like other military branches, the Army has struggled to meet its enlistment quotas for over a decade. And according to very defensive Army spokespeople trying to convince skeptics that the parade was not for Donald Trump’s birthday, there had always been a festival planned on the National Mall that day, it had been in the works for over two years, and the parade, tacked on just two months ago, was purely incidental. Assuming their statement was true, I wasn’t quite sure if they had anticipated so many people in blatant MAGA swag in attendance — or how eager they were to bring their children and hand them assault rifles.

    [Photo: An Army festival attendee holds an M3 Carl Gustav Recoilless Rifle on June 14, 2025 in Washington, DC. Anna Moneymaker / Getty Images]

    There had been kid-friendly events planned: an NFL Kids Zone with a photo op with the Washington Commanders’ mascot, a few face-painting booths, several rock-climbing walls. But they were dwarfed, literally, by dozens of war machines parked along the jogging paths: massive tanks, trucks with gun-mounted turrets, assault helicopters, many of them currently used in combat, all with helpful signs explaining the history of each vehicle, as well as the guns and ammo it could carry.

    And the families — wearing everything from J6 shirts to Vineyard Vines — were drawn more to the military vehicles, all too ready to place their kids in the cockpit of an AH-1F Cobra 998 helicopter as they pretended to aim the nose-mounted three-barreled Gatling cannon. Parents told their children to smile as they poked their little heads out of the hatch of an M1135 Stryker armored vehicle; reminded them to be patient as they waited in line to sit inside an M109A7 self-propelled Howitzer with a 155mm rifled cannon.

    [Photo: Attendees look at a military vehicle on display. Bloomberg via Getty Images]

    But seeing a kid’s happiness at being inside a big thing that goes boom was nothing compared to the grownups’ faces when they got the chance to hold genuine military assault rifles — especially the grownups who had made sure to wear Trump merch during the Army’s birthday party. (Some even handed the rifles to their children for their own photo ops.) It seemed that not even a free Army-branded Bluetooth speaker could compare to how fucking sick the modded AR-15 was. Attendees were in raptures over the Boston Dynamics robot dog gun, the quadcopter drone gun, or really any of the other guns available (except for the historic guns; those were only maybe cool).

    However many protesters made it out to DC, they were dwarfed by the thousands of people winding down Constitution Avenue to enter the parade viewing grounds: lots of MAGA heads, lots of foreign tourists, all people who really just like to see big, big tanks. “Angry LOSERS!” they jeered at the protesters. (“Don’t worry about them,” said one cop. “They lost anyways.”) After walking past them, winding through hundreds of yards of metal fencing, funneling through security, and crossing a choked pedestrian bridge over Constitution Avenue, I was finally dumped into the parade viewing section: slightly muggy and surprisingly navigable.

    But whatever sluggishness the crowd was feeling immediately dissipated the moment a tank turned the corner — and the music started blasting. Americans have a critical weakness for ’70s and ’80s rock, and this crowd seemed more than willing to look past the questionable origins of the parade so long as the soundtrack had a sick guitar solo. An M1 Abrams tank driving past you while “Barracuda” blasts on a tower of speakers? Badass. Black Hawk helicopters circling the Washington Monument and disappearing behind the African American history museum while you thrash your head to “Separate Ways” by Journey? Fucking badass. ANOTHER M1 ABRAMS TANK?!?! AND TO “FORTUNATE SON”??!?! “They got me fucking hooked,” a young redheaded man said behind me as the crowd screamed for the waving drivers. (The tank was so badass that the irony of “Fortunate Son” didn’t matter.)

    [Photo: Members of the U.S. Army drive Bradley Fighting Vehicles in the 250th birthday parade on June 14, 2025 in Washington, DC. Getty Images]

    When you listen to the hardest fucking rock soundtrack long enough, and learn more about how fucking sick the Bradley Fighting Vehicles streaming by you are (either from the parade announcer or the tank enthusiast next to you), an animalistic hype takes over — enough to drown out all the nationwide anger about the parade, the enormity of Trump’s power grab, the fact that two Minnesota Democratic lawmakers were shot in their homes just that morning, the riot police roving the streets of LA.

    It helped that it didn’t rain. It helped that the only people at the parade were the diehards who didn’t care if they were rained out. And by the end of the parade, they didn’t even bother to stay for Trump’s speech, beelining back to the bridge at the first drop of rain. The only thing that mattered to this crowd inside the security perimeter — more than the Army’s honor and history, and barely more than Trump himself — was firepower, strength, hard rock, and America’s unparalleled, world-class ability to kill.
  • Pay for Performance -- How Do You Measure It?

    More enterprises have moved to pay-for-performance salary and promotion models that measure progress toward goals -- but how do you measure goals for a maintenance programmer who barrels through a request backlog but delivers marginal value for the business, or for a business analyst whose success is predicated on forging intangibles like trust and cooperation with users so things can get done? It’s an age-old question facing companies, now that 77% of them use some type of pay-for-performance model.

    What are some popular pay-for-performance use cases? A factory doing piece work that pays employees based upon the number of items they assemble. A call center that pays agents based on how many calls they complete per day. A bank teller who gets rewarded for how many customers they sign up for credit cards. An IT project team that gets a bonus for completing a major project ahead of schedule. The IT example differs from the others because it depends on team rather than individual execution, but there is nevertheless something tangible to measure. The other use cases are more clear-cut -- although they don’t account for pieces in the plant that were poorly assembled in haste to make quota and had to be reworked, or a call center agent who pushes calls off to someone else so they can end their calls in six minutes or less, or the teller who signs up X number of customers for credit cards even though two-thirds of them never use the card they signed up for.

    In short, there are flaws in pay-for-performance models, just as there are in the other types of compensation models that organizations use. So what’s the best path for CIOs who want to implement pay for performance in IT? One approach is to measure pay for performance based upon four key elements: hard results, effort, skill, and communications. The mix of these elements will vary depending on the type of position each IT staff member performs. Here are two examples of pay for performance by position:

    1. Computer maintenance programmers and help desk specialists

    Historically, IT departments have used hard numbers, like how many open requests a maintenance programmer has closed or how many calls a help desk employee has solved. There is merit in using hard results, and they should be factored into performance reviews for these individuals -- but hard numbers don’t tell the whole story. For example, how many times has a help desk agent gone the extra mile with a difficult user or software bug, taking the time to see the entire process through until it is thoroughly solved? If the issue was of a global nature, did the help desk agent follow up by letting others who use the application know that a bug was fixed? For the maintenance programmer who has completed the most open requests, which of those requests really solved a major business pain point? For both help desk and maintenance programming employees, were the changes and fixes properly documented and communicated to everyone with a need to know? And did these employees demonstrate the skills needed to solve their issues?

    It’s difficult to capture hard results on elements like effort, communication, and skills, but one way to go about it is to survey user departments on individual levels of service and effectiveness. From there, it’s up to IT managers to determine the “mix” of hard results, effort, communication, and skills on which the employee will be evaluated, and to communicate upfront to the employee what the pay-for-performance assessment will be based on.

    2. Business analysts and trainers

    Business analysts and trainers are difficult to quantify in pay-for-performance models because so much of their success depends upon other people. A business analyst can know everything there is to know about a particular business area and its systems, but if the analyst is working with unresponsive users, or lacks the soft skills needed to communicate with users, pay for performance can’t be based upon the technology skillset alone. IT trainers face a somewhat different dilemma: they can produce the training that new staff members need before staff is deployed on key projects, but if a project gets delayed and trainees lose the knowledge they learned, there is little the trainer can do aside from offering a refresher course.

    Can pay for performance be used for positions like these? It’s a mixed answer. Yes, pay for performance can be used for trainers, based upon how many individuals the trainer trains and how many new courses the trainer obtains or develops. These are the hard results. However, since so much of training’s execution depends upon other people downstream, like project managers who must start projects on time so new skills aren’t lost, managers of training should also consider pay-for-performance elements such as effort, skills, and communication. In sum, for both business analysts and trainers, there are hard results that can be factored into a pay-for-performance formula, but there is also a need to survey each position’s “customers” -- those individuals who utilized the business analyst’s or trainer’s skills and products to accomplish their respective objectives in projects and training. Were these user-customers satisfied?

    Summary Remarks

    The value that IT employees contribute to IT overall and to the business at large is a combination of tangible and intangible results. Pay-for-performance models are well suited to gauge tangible outcomes, but they fall short when it comes to the intangibles that can be just as important.

    Many years ago, when Pat Riley was coaching the Los Angeles Lakers, an interviewer asked what metrics he used to measure the effectiveness of individual players on the court. Was it the number of points, rebounds, or assists? Riley said he used an “effort” index. For example, how many times did a player go up to get a rebound, even if he didn’t end up with the ball? The effort individual players exhibited mattered, Riley said, because even if they didn’t get the rebound, they were creating situations so someone else on the team could. IT is similar. It’s why OKR International, a performance consultancy, stated, “Intangibles often create or destroy value quietly -- until their impact is too big to ignore. In the long run, they are the unseen levers that determine whether strategy thrives or withers.”

    What CIOs and IT leadership can do when they use pay for performance is ensure that hard results, effort, communications, and skills are appropriately blended for each IT staff position and its responsibilities and realities. You can’t attach a numerical measurement to everything -- but you can observe the visible changes that begin to manifest when a business analyst turns around what had been a hostile relationship with a user department and things begin to get done.
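    The four-element blend described above can be sketched as a simple weighted score. This is a hypothetical illustration, not anything the article prescribes: the 0-10 rating scale, the weight values, and the role names are all assumptions, standing in for whatever mix an IT manager communicates upfront for each position.

    ```python
    # Hypothetical sketch of a blended pay-for-performance score using the
    # four elements named in the article: hard results, effort, skill, and
    # communications. Scales (0-10) and per-role weights are illustrative only.

    def blended_score(ratings, weights):
        """Weighted average of 0-10 ratings; weights must sum to 1."""
        assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
        return sum(ratings[k] * weights[k] for k in weights)

    # A help desk role might lean on hard results (requests closed)...
    help_desk_weights = {"hard_results": 0.5, "effort": 0.2,
                         "skill": 0.15, "communications": 0.15}
    # ...while a business analyst's mix leans on the intangibles.
    analyst_weights = {"hard_results": 0.25, "effort": 0.25,
                       "skill": 0.2, "communications": 0.3}

    # Ratings gathered from, e.g., hard numbers plus user-department surveys.
    ratings = {"hard_results": 8, "effort": 9, "skill": 7, "communications": 6}

    print(round(blended_score(ratings, help_desk_weights), 2))  # 7.75
    print(round(blended_score(ratings, analyst_weights), 2))    # 7.45
    ```

    The same ratings produce different scores under different role weightings, which is the article’s point: the mix, communicated upfront, is what makes the model fit the position.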
    #pay #performance #how #you #measure
    Pay for Performance -- How Do You Measure It?
    More enterprises have moved to pay-for-performance salary and promotion models that measure progress toward goals -- but how do you measure goals for a maintenance programmer who barrels through a request backlog but delivers marginal value for the business, or for a business analyst whose success is predicated on forging intangibles like trust and cooperation with users so things can get done? It’s an age-old question facing companies, now that 77% of them use some type of pay-for-performance model. What are some popular pay-for-performance use cases? A factory doing piece work that pays employees based upon the number of items they assemble. A call center that pays agents based on how many calls they complete per day. A bank teller who gets rewarded for how many customers they sign up for credit cards. An IT project team that gets a bonus for completing a major project ahead of schedule. The IT example differs from the others, because it depends on team and not individual execution, but there nevertheless is something tangible to measure. The other use cases are more clearcut -- although they don’t account for pieces in the plant that were poorly assembled in haste to make quota and had to be reworked, or a call center agent who pushes calls off to someone else so they can end their calls in six minutes or less, or the teller who signs up X number of customers for credit cards, although two-thirds of them never use the credit card they signed up for. Related:In short, there are flaws in pay-for-performance models just as there are in other types of compensation models that organizations use. So, what’s the best path for IT for CIOs who want to implement pay for performance? One approach is to measure pay for performance based upon four key elements: hard results, effort, skill, and communications. The mix of these elements will vary, depending on the type of position each IT staff member performs. Here are two examples of pay per performance by position: 1. 
1. Computer maintenance programmers and help desk specialists

Historically, IT departments have used hard numbers, like how many open requests a computer maintenance programmer has closed or how many calls a help desk employee has solved. There is merit in using hard results, and they should be factored into performance reviews for these individuals -- but hard numbers don’t tell the whole story.

For example, how many times has a help desk agent gone the extra mile with a difficult user or software bug, taking the time to see the entire process through until it is thoroughly solved? If the issue was of a global nature, did the help desk agent follow up by letting others who use the application know that a bug was fixed? For the maintenance programmer who has completed the most open requests, which of those requests really solved a major business pain point? For both help desk and maintenance programming employees, were the changes and fixes properly documented and communicated to everyone with a need to know? And did these employees demonstrate the skills needed to solve their issues?

It’s difficult to capture hard results on elements like effort, communication, and skills, but one way to go about it is to survey user departments on individual levels of service and effectiveness. From there, it’s up to IT managers to determine the “mix” of hard results, effort, communication, and skills on which the employee will be evaluated, and to communicate upfront to the employee what the pay-for-performance assessment will be based on.

2. Business analysts and trainers

Business analysts and trainers are difficult to quantify in pay-for-performance models because so much of their success depends upon other people.
A business analyst can know everything there is to know about a particular business area and its systems, but if the analyst is working with unresponsive users, or lacks the soft skills needed to communicate with users, pay for performance can’t be based upon the technology skillset alone.

IT trainers face a somewhat different dilemma when it comes to performance evaluation: they can produce the training that new staff members need before staff is deployed on key projects, but if a project gets delayed and the trainees lose the knowledge they learned, there is little the trainer can do aside from offering a refresher course.

Can pay for performance be used for positions like these? It’s a mixed answer. Yes, pay for performance can be used for trainers, based upon how many individuals the trainer trains and how many new courses the trainer obtains or develops. These are the hard results. However, since so much of training’s execution depends upon other people downstream, like project managers who must start projects on time so new skills aren’t lost, managers of training should also consider pay-for-performance elements such as effort (has the trainer consistently gone the extra mile to make things work?), skills, and communication.

In sum, for both business analysts and trainers, there are hard results that can be factored into a pay-for-performance formula, but there is also a need to survey each position’s “customers” -- those individuals (and their managers) who utilized the business analyst’s or trainer’s skills and products to accomplish their respective objectives in projects and training. Were these user-customers satisfied?

Summary Remarks

The value that IT employees contribute to overall IT and to the business at large is a combination of tangible and intangible results. Pay-for-performance models are well suited to gauge tangible outcomes, but they fall short when it comes to the intangibles that could be just as important.
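The four-element blend described above can be sketched in a few lines of code. This is a hypothetical illustration, not a formula from the article: the role names, the 0-100 rating scale, and the per-role weights are all assumptions chosen to show how a roles-based "mix" might be computed.

```python
# Hypothetical sketch of blending the four pay-for-performance elements
# (hard results, effort, skill, communication) into one score per role.
# Weights are illustrative assumptions: output-heavy roles lean on hard
# results, relationship-heavy roles lean on the intangibles.

ROLE_WEIGHTS = {
    "maintenance_programmer": {
        "hard_results": 0.5, "effort": 0.2, "skill": 0.2, "communication": 0.1,
    },
    "business_analyst": {
        "hard_results": 0.2, "effort": 0.3, "skill": 0.2, "communication": 0.3,
    },
}

def performance_score(role: str, ratings: dict[str, float]) -> float:
    """Weighted blend of 0-100 ratings for one employee in a given role."""
    weights = ROLE_WEIGHTS[role]
    return sum(weights[element] * ratings[element] for element in weights)

score = performance_score(
    "business_analyst",
    {"hard_results": 70, "effort": 90, "skill": 80, "communication": 85},
)
print(round(score, 1))  # 82.5
```

The point of the sketch is the article's own: the weights differ by position, and the intangible ratings (effort, communication) would come from something like the user-department surveys described above, not from a ticket counter.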
Many years ago, when Pat Riley was coaching the Los Angeles Lakers, an interviewer asked what type of metrics he used to measure the effectiveness of individual players on the basketball court. Was it the number of points, rebounds, or assists? Riley said he used an “effort” index: for example, how many times did a player go up to get a rebound, even if he didn’t end up with the ball? The effort individual players exhibited mattered, he said, because even if they didn’t get the rebound, they were creating situations so someone else on the team could.

IT is similar. It’s why OKR International, a performance consultancy, stated, “Intangibles often create or destroy value quietly -- until their impact is too big to ignore. In the long run, they are the unseen levers that determine whether strategy thrives or withers.”

What CIOs and IT leadership can do when they use pay for performance is to ensure that hard results, effort, communications, and skills are appropriately blended for each IT staff position and its responsibilities and realities. You can’t attach a numerical measurement to everything -- but you can observe the visible changes that begin to manifest when a business analyst turns around what had been a hostile relationship with a user department and things begin to get done.
    WWW.INFORMATIONWEEK.COM
  • US lawyer sanctioned after being caught using ChatGPT for court brief | Richard Bednar apologized after Utah appeals court discovered false citations, including one nonexistent case.

    The Utah court of appeals has sanctioned a lawyer after he was discovered to have used ChatGPT for a filing in which he referenced a nonexistent court case.

Earlier this week, the Utah court of appeals made the decision to sanction Richard Bednar over claims that he filed a brief which included false citations. According to court documents reviewed by ABC4, Bednar and Douglas Durbano, another Utah-based lawyer who was serving as the petitioner’s counsel, filed a “timely petition for interlocutory appeal”. Upon reviewing the brief, which was written by a law clerk, the respondent’s counsel found several false citations of cases.

“It appears that at least some portions of the Petition may be AI-generated, including citations and even quotations to at least one case that does not appear to exist in any legal database (and could only be found in ChatGPT) and references to cases that are wholly unrelated to the referenced subject matter,” the respondent’s counsel said in documents reviewed by ABC4. The outlet reports that the brief referenced a case titled “Royer v Nelson”, which did not exist in any legal database.

Following the discovery of the false citations, Bednar “acknowledged ‘the errors contained in the petition’ and apologized”, according to a document from the Utah court of appeals, ABC4 reports. It went on to add that during a hearing in April, Bednar and his attorney “acknowledged that the petition contained fabricated legal authority, which was obtained from ChatGPT, and they accepted responsibility for the contents of the petition”. According to Bednar and his attorney, an “unlicensed law clerk” wrote up the brief and Bednar did not “independently check the accuracy” before he made the filing.
ABC4 further reports that Durbano was not involved in the creation of the petition, and that the law clerk responsible for the filing was a law school graduate who was terminated from the law firm. The outlet added that Bednar offered to pay any related attorney fees to “make amends”.

In a statement reported by ABC4, the Utah court of appeals said: “We agree that the use of AI in the preparation of pleadings is a legal research tool that will continue to evolve with advances in technology. However, we emphasize that every attorney has an ongoing duty to review and ensure the accuracy of their court filings. In the present case, petitioner’s counsel fell short of their gatekeeping responsibilities as members of the Utah State Bar when they submitted a petition that contained fake precedent generated by ChatGPT.”

As a result of the false citations, ABC4 reports that Bednar was ordered to pay the respondent’s attorney fees for the petition and hearing, refund fees to his client for the time used to prepare the filing and attend the hearing, and donate $1,000 to the Utah-based legal non-profit And Justice for All.
    WWW.THEGUARDIAN.COM
  • Why do lawyers keep using ChatGPT?

    Every few weeks, it seems like there’s a new headline about a lawyer getting in trouble for submitting filings containing, in the words of one judge, “bogus AI-generated research.” The details vary, but the throughline is the same: an attorney turns to a large language model (LLM) like ChatGPT to help them with legal research, the LLM hallucinates cases that don’t exist, and the lawyer is none the wiser until the judge or opposing counsel points out their mistake. In some cases, including an aviation lawsuit from 2023, attorneys have had to pay fines for submitting filings with AI-generated hallucinations. So why haven’t they stopped?

The answer mostly comes down to time crunches, and the way AI has crept into nearly every profession. Legal research databases like LexisNexis and Westlaw have AI integrations now. For lawyers juggling big caseloads, AI can seem like an incredibly efficient assistant. Most lawyers aren’t necessarily using ChatGPT to write their filings, but they are increasingly using it and other LLMs for research. Yet many of these lawyers, like much of the public, don’t understand exactly what LLMs are or how they work. One attorney who was sanctioned in 2023 said he thought ChatGPT was a “super search engine.” It took submitting a filing with fake citations to reveal that it’s more like a random-phrase generator — one that could give you either correct information or convincingly phrased nonsense.

Andrew Perlman, the dean of Suffolk University Law School, argues many lawyers are using AI tools without incident, and the ones who get caught with fake citations are outliers. “I think that what we’re seeing now — although these problems of hallucination are real, and lawyers have to take it very seriously and be careful about it — doesn’t mean that these tools don’t have enormous possible benefits and use cases for the delivery of legal services,” Perlman said.
Legal databases and research systems like Westlaw are incorporating AI services. In fact, 63 percent of lawyers surveyed by Thomson Reuters in 2024 said they’ve used AI in the past, and 12 percent said they use it regularly. Respondents said they use AI to write summaries of case law and to research “case law, statutes, forms or sample language for orders.” The attorneys surveyed by Thomson Reuters see it as a time-saving tool, and half of those surveyed said “exploring the potential for implementing AI” at work is their highest priority. “The role of a good lawyer is as a ‘trusted advisor’ not as a producer of documents,” one respondent said. But as plenty of recent examples have shown, the documents produced by AI aren’t always accurate, and in some cases aren’t real at all.

In one recent high-profile case, lawyers for journalist Tim Burke, who was arrested for publishing unaired Fox News footage in 2024, submitted a motion to dismiss the case against him on First Amendment grounds. After discovering that the filing included “significant misrepresentations and misquotations of supposedly pertinent case law and history,” Judge Kathryn Kimball Mizelle, of Florida’s middle district, ordered the motion to be stricken from the case record. Mizelle found nine hallucinations in the document, according to the Tampa Bay Times.

Mizelle ultimately let Burke’s lawyers, Mark Rasch and Michael Maddux, submit a new motion. In a separate filing explaining the mistakes, Rasch wrote that he “assumes sole and exclusive responsibility for these errors.” Rasch said he used the “deep research” feature on ChatGPT Pro, which The Verge has previously tested with mixed results, as well as Westlaw’s AI feature.

Rasch isn’t alone. Lawyers representing Anthropic recently admitted to using the company’s Claude AI to help write an expert witness declaration submitted as part of the copyright infringement lawsuit brought against Anthropic by music publishers.
That filing included a citation with an “inaccurate title and inaccurate authors.” Last December, misinformation expert Jeff Hancock admitted he used ChatGPT to help organize citations in a declaration he submitted in support of a Minnesota law regulating deepfake use. Hancock’s filing included “two citation errors, popularly referred to as ‘hallucinations,’” and incorrectly listed authors for another citation.

These documents do, in fact, matter — at least in the eyes of judges. In a recent case, a California judge presiding over a case against State Farm was initially swayed by arguments in a brief, only to find that the case law cited was completely made up. “I read their brief, was persuaded by the authorities that they cited, and looked up the decisions to learn more about them – only to find that they didn’t exist,” Judge Michael Wilner wrote.

Perlman said there are several less risky ways lawyers use generative AI in their work, including finding information in large tranches of discovery documents, reviewing briefs or filings, and brainstorming possible arguments or possible opposing views. “I think in almost every task, there are ways in which generative AI can be useful — not a substitute for lawyers’ judgment, not a substitute for the expertise that lawyers bring to the table, but in order to supplement what lawyers do and enable them to do their work better, faster, and cheaper,” Perlman said.

But like anyone using AI tools, lawyers who rely on them to help with legal research and writing need to be careful to check the work they produce, Perlman said. Part of the problem is that attorneys often find themselves short on time — an issue he says existed before LLMs came into the picture. “Even before the emergence of generative AI, lawyers would file documents with citations that didn’t really address the issue that they claimed to be addressing,” Perlman said. “It was just a different kind of problem.
Sometimes when lawyers are rushed, they insert citations, they don’t properly check them; they don’t really see if the case has been overturned or overruled.”

Another, more insidious problem is the fact that attorneys — like others who use LLMs to help with research and writing — are too trusting of what AI produces. “I think many people are lulled into a sense of comfort with the output, because it appears at first glance to be so well crafted,” Perlman said.

Alexander Kolodin, an election lawyer and Republican state representative in Arizona, said he treats ChatGPT as a junior-level associate. He’s also used ChatGPT to help write legislation. In 2024, he included AI text in part of a bill on deepfakes, having the LLM provide the “baseline definition” of what deepfakes are, and then “I, the human, added in the protections for human rights, things like that it excludes comedy, satire, criticism, artistic expression, that kind of stuff,” Kolodin told The Guardian at the time. Kolodin said he “may have” discussed his use of ChatGPT with the bill’s main Democratic cosponsor but otherwise wanted it to be “an Easter egg” in the bill. The bill passed into law.

Kolodin — who was sanctioned by the Arizona State Bar in 2020 for his involvement in lawsuits challenging the result of the 2020 election — has also used ChatGPT to write first drafts of amendments, and told The Verge he uses it for legal research as well. To avoid the hallucination problem, he said, he just checks the citations to make sure they’re real. “You don’t just typically send out a junior associate’s work product without checking the citations,” said Kolodin. “It’s not just machines that hallucinate; a junior associate could read the case wrong, it doesn’t really stand for the proposition cited anyway, whatever.
You still have to cite-check it, but you have to do that with an associate anyway, unless they were pretty experienced.” Kolodin said he uses both ChatGPT’s pro “deep research” tool and the LexisNexis AI tool. Like Westlaw, LexisNexis is a legal research tool primarily used by attorneys. Kolodin said that in his experience, it has a higher hallucination rate than ChatGPT, which he says has “gone down substantially over the past year.”

AI use among lawyers has become so prevalent that in 2024, the American Bar Association issued its first guidance on attorneys’ use of LLMs and other AI tools. Lawyers who use AI tools “have a duty of competence, including maintaining relevant technological competence, which requires an understanding of the evolving nature” of generative AI, the opinion reads. The guidance advises lawyers to “acquire a general understanding of the benefits and risks of the GAI tools” they use — or, in other words, to not assume that an LLM is a “super search engine.” Attorneys should also weigh the confidentiality risks of inputting information relating to their cases into LLMs, and consider whether to tell their clients about their use of LLMs and other AI tools, it states.

Perlman is bullish on lawyers’ use of AI. “I do think that generative AI is going to be the most impactful technology the legal profession has ever seen and that lawyers will be expected to use these tools in the future,” he said. “I think that at some point, we will stop worrying about the competence of lawyers who use these tools and start worrying about the competence of lawyers who don’t.”

Others, including one of the judges who sanctioned lawyers for submitting a filing full of AI-generated hallucinations, are more skeptical. “Even with recent advances,” Wilner wrote, “no reasonably competent attorney should out-source research and writing to this technology — particularly without any attempt to verify the accuracy of that material.”
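The cite-checking habit Kolodin describes can be partially mechanized. The sketch below is a hypothetical illustration, not any real firm's tooling: it pulls simple "Name v Name" citation strings out of a draft with a regular expression and flags any that don't appear in a trusted list of known-real cases. A real workflow would query a legal database such as Westlaw or LexisNexis rather than a hard-coded set.

```python
import re

# Hypothetical sketch: flag case citations in a draft that are absent
# from a trusted list of known-real cases. The list is illustrative;
# in practice you would check against a legal database.
KNOWN_CASES = {"Smith v Jones", "Doe v Roe"}

# Matches simple one-word-party citations like "Smith v Jones".
CITATION_RE = re.compile(r"\b[A-Z][a-z]+ v [A-Z][a-z]+\b")

def suspicious_citations(draft: str) -> list[str]:
    """Return citations found in the draft but not in the trusted list."""
    return [c for c in CITATION_RE.findall(draft) if c not in KNOWN_CASES]

draft = "As held in Smith v Jones, and reaffirmed in Royer v Nelson, ..."
print(suspicious_citations(draft))  # ['Royer v Nelson']
```

A check like this only catches citations that fail a lookup, which is exactly the failure mode in the Bednar case: “Royer v Nelson” existed in no legal database. It cannot tell you whether a real case actually supports the proposition cited, which is why the ABA guidance still puts the verification duty on the attorney.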
  • What DEI actually does for the economy

    Few issues in the U.S. today are as controversial as diversity, equity, and inclusion—commonly referred to as DEI.

    Although the term didn’t come into common usage until the 21st century, DEI is best understood as the latest stage in a long American project. Its egalitarian principles are seen in America’s founding documents, and its roots lie in landmark 20th-century efforts such as the 1964 Civil Rights Act and affirmative action policies, as well as movements for racial justice, gender equity, disability rights, veterans, and immigrants.

    These movements sought to expand who gets to participate in economic, educational, and civic life. DEI programs, in many ways, are their legacy.

    Critics argue that DEI is antidemocratic, that it fosters ideological conformity, and that it leads to discriminatory initiatives, which they say disadvantage white people and undermine meritocracy. Those defending DEI argue just the opposite: that it encourages critical thinking and promotes democracy—and that attacks on DEI amount to a retreat from long-standing civil rights law.

    Yet missing from much of the debate is a crucial question: What are the tangible costs and benefits of DEI? Who benefits, who doesn’t, and what are the broader effects on society and the economy?

    As a sociologist, I believe any productive conversation about DEI should be rooted in evidence, not ideology. So let’s look at the research.

    Who gains from DEI?

    In the corporate world, DEI initiatives are intended to promote diversity, and research consistently shows that diversity is good for business. Companies with more diverse teams tend to perform better across several key metrics, including revenue, profitability, and worker satisfaction.

    Businesses with diverse workforces also have an edge in innovation, recruitment, and competitiveness, research shows. The general trend holds for many types of diversity, including age, race and ethnicity, and gender.

    A focus on diversity can also offer profit opportunities for businesses seeking new markets. Two-thirds of American consumers consider diversity when making their shopping choices, a 2021 survey found. So-called “inclusive consumers” tend to be female, younger, and more ethnically and racially diverse. Ignoring their values can be costly: When Target backed away from its DEI efforts, the resulting backlash contributed to a sales decline.

    But DEI goes beyond corporate policy. At its core, it’s about expanding access to opportunities for groups historically excluded from full participation in American life. From this broader perspective, many 20th-century reforms can be seen as part of the DEI arc.

    Consider higher education. Many elite U.S. universities refused to admit women until well into the 1960s and 1970s. Columbia, the last Ivy League university to go co-ed, started admitting women in 1982. Since the advent of affirmative action, women haven’t just closed the gender gap in higher education—they outpace men in college completion across all racial groups. DEI policies have particularly benefited women, especially white women, by expanding workforce access.

    Similarly, the push to desegregate American universities was followed by an explosion in the number of Black college students—a number that has increased by 125% since the 1970s, twice the national rate. With college gates open to more people than ever, overall enrollment at U.S. colleges has quadrupled since 1965. While there are many reasons for this, expanding opportunity no doubt plays a role. And a better-educated population has had significant implications for productivity and economic growth.

    The 1965 Immigration Act also exemplifies DEI’s impact. It abolished racial and national quotas, enabling the immigration of more diverse populations, including from Asia, Africa, southern and eastern Europe, and Latin America. Many of these immigrants were highly educated, and their presence has boosted U.S. productivity and innovation.

    Ultimately, the U.S. economy is more profitable and productive as a result of immigrants.

    What does DEI cost?

    While DEI generates returns for many businesses and institutions, it does come with costs. In 2020, corporate America spent an estimated billion on DEI programs. And in 2023, the federal government spent more than million on DEI, including million by the Department of Health and Human Services and another million by the Department of Defense.

    The government will no doubt be spending less on DEI in 2025. One of President Donald Trump’s first acts in his second term was to sign an executive order banning DEI practices in federal agencies—one of several anti-DEI executive orders currently facing legal challenges. More than 30 states have also introduced or enacted bills to limit or entirely restrict DEI in recent years. Central to many of these policies is the belief that diversity lowers standards, replacing meritocracy with mediocrity.

    But a large body of research disputes this claim. For example, a 2023 McKinsey & Company report found that companies with the highest levels of gender and ethnic diversity are 39% more likely to financially outperform those with the least diversity. Similarly, concerns that DEI in science and technology education leads to lowering standards aren’t backed up by scholarship. Instead, scholars are increasingly pointing out that disparities in performance are linked to built-in biases in courses themselves.

    That said, legal concerns about DEI are rising. The Equal Employment Opportunity Commission and the Department of Justice have recently warned employers that some DEI programs may violate Title VII of the Civil Rights Act of 1964. Anecdotal evidence suggests that reverse discrimination claims, particularly from white men, are increasing, and legal experts expect the Supreme Court to lower the burden of proof needed by complainants for such cases.

    The issue remains legally unsettled. But while the cases work their way through the courts, women and people of color will continue to shoulder much of the unpaid volunteer work that powers corporate DEI initiatives. This pattern raises important equity concerns within DEI itself.

    What lies ahead for DEI?

    People’s fears of DEI are partly rooted in demographic anxiety. Since the U.S. Census Bureau projected in 2008 that non-Hispanic white people would become a minority in the U.S. by the year 2042, nationwide news coverage has amplified white fears of displacement.

    Research indicates many white men experience this change as a crisis of identity and masculinity, particularly amid economic shifts such as the decline of blue-collar work. This perception aligns with research showing that white Americans are more likely to believe DEI policies disadvantage white men than white women.

    At the same time, in spite of DEI initiatives, women and people of color are most likely to be underemployed and living in poverty regardless of how much education they attain. The gender wage gap remains stark: In 2023, women working full time earned a median weekly salary of compared with for men—just 83.6% of what men earned. Over a 40-year career, that adds up to hundreds of thousands of dollars in lost earnings. For Black and Latina women, the disparities are even worse, with one source estimating lifetime losses at and million, respectively.
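    The career-long arithmetic behind that claim is easy to sketch. The exact 2023 median weekly earnings aren't reproduced above, so the figures below are hypothetical, chosen only so that the women-to-men ratio matches the cited 83.6%:

```python
# Hypothetical weekly medians (NOT the actual BLS values); the women's
# figure is arbitrary and the men's figure is implied by the 83.6% ratio.
RATIO = 0.836
women_weekly = 1000.0
men_weekly = women_weekly / RATIO

WEEKS_PER_YEAR = 52
CAREER_YEARS = 40

weekly_gap = men_weekly - women_weekly
career_gap = weekly_gap * WEEKS_PER_YEAR * CAREER_YEARS

print(f"Weekly gap:  ${weekly_gap:,.0f}")
print(f"Career gap:  ${career_gap:,.0f}")
```

    Under these assumed figures the gap compounds to roughly $400,000 over 40 years, which is the "hundreds of thousands of dollars" scale the article describes; any realistic weekly medians at an 83.6% ratio land in the same order of magnitude.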

    Racism, too, carries an economic toll. A 2020 analysis from Citi found that systemic racism has cost the U.S. economy trillion since 2000. The same analysis found that addressing these disparities could have boosted Black wages by trillion, added up to billion in lifetime earnings through higher college enrollment, and generated trillion in business revenue, creating 6.1 million jobs annually.

    In a moment of backlash and uncertainty, I believe DEI remains a vital if imperfect tool in the American experiment of inclusion. Rather than abandon it, the challenge now, from my perspective, is how to refine it: grounding efforts not in slogans or fear, but in fairness and evidence.

    Rodney Coates is a professor of critical race and ethnic studies at Miami University.

    This article is republished from The Conversation under a Creative Commons license. Read the original article.
    #what #dei #actually #does #economy
    What DEI actually does for the economy
    Few issues in the U.S. today are as controversial as diversity, equity, and inclusion—commonly referred to as DEI. Although the term didn’t come into common usage until the 21st century, DEI is best understood as the latest stage in a long American project. Its egalitarian principles are seen in America’s founding documents, and its roots lie in landmark 20th-century efforts such as the 1964 Civil Rights Act and affirmative action policies, as well as movements for racial justice, gender equity, disability rights, veterans, and immigrants. These movements sought to expand who gets to participate in economic, educational, and civic life. DEI programs, in many ways, are their legacy. Critics argue that DEI is antidemocratic, that it fosters ideological conformity, and that it leads to discriminatory initiatives, which they say disadvantage white people and undermine meritocracy. Those defending DEI argue just the opposite: that it encourages critical thinking and promotes democracy—and that attacks on DEI amount to a retreat from long-standing civil rights law. Yet missing from much of the debate is a crucial question: What are the tangible costs and benefits of DEI? Who benefits, who doesn’t, and what are the broader effects on society and the economy? As a sociologist, I believe any productive conversation about DEI should be rooted in evidence, not ideology. So let’s look at the research. Who gains from DEI? In the corporate world, DEI initiatives are intended to promote diversity, and research consistently shows that diversity is good for business. Companies with more diverse teams tend to perform better across several key metrics, including revenue, profitability, and worker satisfaction. Businesses with diverse workforces also have an edge in innovation, recruitment, and competitiveness, research shows. The general trend holds for many types of diversity, including age, race, and ethnicity, and gender. 
A focus on diversity can also offer profit opportunities for businesses seeking new markets. Two-thirds of American consumers consider diversity when making their shopping choices, a 2021 survey found. So-called “inclusive consumers” tend to be female, younger, and more ethnically and racially diverse. Ignoring their values can be costly: When Target backed away from its DEI efforts, the resulting backlash contributed to a sales decline. But DEI goes beyond corporate policy. At its core, it’s about expanding access to opportunities for groups historically excluded from full participation in American life. From this broader perspective, many 20th-century reforms can be seen as part of the DEI arc. Consider higher education. Many elite U.S. universities refused to admit women until well into the 1960s and 1970s. Columbia, the last Ivy League university to go co-ed, started admitting women in 1982. Since the advent of affirmative action, women haven’t just closed the gender gap in higher education—they outpace men in college completion across all racial groups. DEI policies have particularly benefited women, especially white women, by expanding workforce access. Similarly, the push to desegregate American universities was followed by an explosion in the number of Black college students—a number that has increased by 125% since the 1970s, twice the national rate. With college gates open to more people than ever, overall enrollment at U.S. colleges has quadrupled since 1965. While there are many reasons for this, expanding opportunity no doubt plays a role. And a better-educated population has had significant implications for productivity and economic growth. The 1965 Immigration Act also exemplifies DEI’s impact. It abolished racial and national quotas, enabling the immigration of more diverse populations, including from Asia, Africa, southern and eastern Europe, and Latin America. Many of these immigrants were highly educated, and their presence has boosted U.S. 
productivity and innovation. Ultimately, the U.S. economy is more profitable and productive as a result of immigrants.

What does DEI cost?

While DEI generates returns for many businesses and institutions, it does come with costs. In 2020, corporate America spent an estimated $7.5 billion on DEI programs. And in 2023, the federal government spent more than $100 million on DEI, including $38.7 million by the Department of Health and Human Services and another $86.5 million by the Department of Defense. The government will no doubt be spending less on DEI in 2025. One of President Donald Trump’s first acts in his second term was to sign an executive order banning DEI practices in federal agencies—one of several anti-DEI executive orders currently facing legal challenges. More than 30 states have also introduced or enacted bills to limit or entirely restrict DEI in recent years.

Central to many of these policies is the belief that diversity lowers standards, replacing meritocracy with mediocrity. But a large body of research disputes this claim. For example, a 2023 McKinsey & Company report found that companies with higher levels of gender and ethnic diversity will likely financially outperform those with the least diversity by at least 39%. Similarly, concerns that DEI in science and technology education leads to lowering standards aren’t backed up by scholarship. Instead, scholars are increasingly pointing out that disparities in performance are linked to built-in biases in the courses themselves.

That said, legal concerns about DEI are rising. The Equal Employment Opportunity Commission and the Department of Justice have recently warned employers that some DEI programs may violate Title VII of the Civil Rights Act of 1964. Anecdotal evidence suggests that reverse discrimination claims, particularly from white men, are increasing, and legal experts expect the Supreme Court to lower the burden of proof needed by complainants in such cases. The issue remains legally unsettled. But while the cases work their way through the courts, women and people of color will continue to shoulder much of the unpaid volunteer work that powers corporate DEI initiatives. This pattern raises important equity concerns within DEI itself.

What lies ahead for DEI?

People’s fears of DEI are partly rooted in demographic anxiety. Since the U.S. Census Bureau projected in 2008 that non-Hispanic white people would become a minority in the U.S. by the year 2042, nationwide news coverage has amplified white fears of displacement. Research indicates many white men experience this change as a crisis of identity and masculinity, particularly amid economic shifts such as the decline of blue-collar work. This perception aligns with research showing that white Americans are more likely to believe DEI policies disadvantage white men than white women.

At the same time, in spite of DEI initiatives, women and people of color are most likely to be underemployed and living in poverty regardless of how much education they attain. The gender wage gap remains stark: In 2023, women working full time earned a median weekly salary of $1,005 compared with $1,202 for men—just 83.6% of what men earned. Over a 40-year career, that adds up to hundreds of thousands of dollars in lost earnings. For Black and Latina women, the disparities are even worse, with one source estimating lifetime losses at $976,800 and $1.2 million, respectively.

Racism, too, carries an economic toll. A 2020 analysis from Citi found that systemic racism has cost the U.S. economy $16 trillion since 2000. The same analysis found that addressing these disparities could have boosted Black wages by $2.7 trillion, added up to $113 billion in lifetime earnings through higher college enrollment, and generated $13 trillion in business revenue, creating 6.1 million jobs annually.

In a moment of backlash and uncertainty, I believe DEI remains a vital if imperfect tool in the American experiment of inclusion. Rather than abandon it, the challenge now, from my perspective, is how to refine it: grounding efforts not in slogans or fear, but in fairness and evidence.

Rodney Coates is a professor of critical race and ethnic studies at Miami University. This article is republished from The Conversation under a Creative Commons license. Read the original article.
    WWW.FASTCOMPANY.COM
    What DEI actually does for the economy
  • "A logo is a kaleidoscope of meaning": why logos are more important than you think

    Brand Impact Awards judge reveals why logos are more than just graphic marks.
    WWW.CREATIVEBLOQ.COM