#elderly
Don’t Mind the Gap in Intergenerational Housing
#realestateandhousingresidential #retirementcommunitiesandassistedliving #elderly #babyboomers #publicandsubsidizedhousing #affordablehousing #sageadvocacyandservicesforlgbtelders #losangeleslgbtcenter #cantinacommunities #mirabellaatasu #lasellvillage #kampungadmiralty #agrihood #oneflushing #stonewallhouse #anitamayrosensteincampus #crotonaseniorresidences #austintex #losangelescalif #bronxnyc #brooklynnyc #santaclaracalif #singapore #tempeariz #auburndalemass #artanddesign #news
Nursing Homes Face Dilemma: Vaccinate Staff or Don’t Get Paid.
#layoffsandjobreductions #coronavirus2019ncov #workplacehazardsandviolations #elderly #nursinghomes #eldercare #vaccinationandimmunization #healthinsuranceandmanagedcare #americanhealthcareassn #centersformedicareandmedicaidservices #unitedstates #yourfeedhealthcare #yourfeedscience #news
Twitter’s Algorithm Found to Favor Photos of Young, Pretty White People
Twitter has wrapped up its first bounty program for artificial intelligence bias on the platform, and the results highlight an issue that has been flagged before.
According to a report from CNET, researcher Bogdan Kulynych (who took home the $3,500 prize) found that an important algorithm on the platform tends to favor faces of people who "look slim and young and with skin that is lighter-colored or with warmer tones." The discovery, which echoes earlier findings, shows that the Twitter "saliency" (importance) scoring system can amplify real-world biases and conventional -- and often unrealistic -- beauty expectations.
The company sponsored the bounty program to find problems in the saliency algorithm it uses to crop images shared on the platform so they fit in the preview pane of the Twitter timeline. A problem with this automated cropping was first reported more than a year ago, and just a few months ago the company announced that it was "axing" AI photo cropping altogether.
2nd place goes to @halt_ai who found the saliency algorithm perpetuated marginalization. For example, images of the elderly and disabled were further marginalized by cropping them out of photos and reinforcing spatial gaze biases.
-- Twitter Engineering (@TwitterEng) August 9, 2021
While AI has taken a lot of the grunt work out of messy tasks such as captioning and subtitling videos, identifying spam, and recognizing faces or fingerprints to unlock devices, these programs are made and trained by real people using real-world data. That data can carry real-world biases, so identifying and addressing AI bias problems has become a booming industry in the computing world.
“The saliency algorithm works by estimating what a person might want to see first within a picture so that our system could determine how to crop an image to an easily viewable size. Saliency models are trained on how the human eye looks at a picture as a method of prioritizing what’s likely to be most important to the most people,” writes Twitter software engineering director Rumman Chowdhury.
“The algorithm, trained on human eye-tracking data, predicts a saliency score on all regions in the image and chooses the point with the highest score as the center of the crop.”
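To make the mechanism Chowdhury describes concrete, here is a minimal sketch of max-saliency cropping, assuming a hypothetical setup where a model has already produced a 2D saliency map; the function and variable names are illustrative and not Twitter's actual implementation.

```python
# Minimal sketch: crop an image around the point with the highest predicted
# saliency score, as described above. Assumes numpy arrays; names are
# illustrative, not Twitter's real code.
import numpy as np

def crop_around_max_saliency(image: np.ndarray,
                             saliency_map: np.ndarray,
                             crop_h: int, crop_w: int) -> np.ndarray:
    """Center a fixed-size crop on the highest-scoring point, clamped to image bounds."""
    img_h, img_w = image.shape[:2]
    # The point with the highest predicted saliency becomes the crop center.
    cy, cx = np.unravel_index(np.argmax(saliency_map), saliency_map.shape)
    # Clamp the crop window so it stays fully inside the image.
    top = int(np.clip(cy - crop_h // 2, 0, max(img_h - crop_h, 0)))
    left = int(np.clip(cx - crop_w // 2, 0, max(img_w - crop_w, 0)))
    return image[top:top + crop_h, left:left + crop_w]
```

In practice the crop window would be sized to match the aspect ratio of the timeline's preview pane, which is what made the choice of crop center so consequential.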
This bias was not the only issue discovered with the algorithm during the bounty program: the algorithm was also found to "perpetuate marginalization" by cropping elderly and disabled people out of images, and it even cut out writing in Arabic. Researchers taking part in the program further found that the light-skin bias even extends to the emoji used.
Bogdan Kulynych - Predicted maximum saliency: 3.5501 → 4.7940 (135.04% increase)
Even though the company addressed the AI system's bias, Kulynych's findings show the problem goes even deeper.
"The target model is biased towards the depictions of people that appear slim, young, of light or warm skin color and smooth skin texture, and with stereotypically feminine facial traits. This bias could result in the exclusion of minoritized populations and perpetuation of stereotypical beauty standards in thousands of images."
Twitter hasn't said how soon it will address the algorithm's bias (if it will at all), but the findings come as backlash against "beauty filters" has been mounting, with critics saying such filters create an unrealistic standard of beauty in images. It will be interesting to see whether the company takes an official stance on the topic one way or the other, especially since it has a history of remaining mostly neutral on the content shared on its platform.
For those interested, Twitter has published the code for winning entries.
Image credits: Header photo licensed via Depositphotos.
#culture #news #ai #artificialintelligence #bias #elderly #issues #marginalization #racebias #sexism #socialmedia #twitter #twitterbounty
Covid Is Especially Risky for People With H.I.V., Large Study Finds
#yourfeedscience #acquiredimmunedeficiencysyndrome #coronavirus2019ncov #clinicaltrials #vaccinationandimmunization #research #immunesystem #elderly #novavaxinc #worldhealthorganization #africa #england #southafrica #yourfeedhealthcare #science #news