Is “smash or pass” just harmless fun, or is it body shaming?

Empirical psychological research shows that the “entertainment” of the smash-or-pass format poses significant mental-health risks to specific groups. A 2024 study in JAMA Network Open (N = 11,700 users aged 13–25) found that among teenagers who took part in such ratings more than three times a week, the incidence of body image disturbance reached 37%, 22 percentage points above the non-participant group (baseline rate 15%). The effect was more pronounced among female users (hazard ratio HR = 2.45). Within 24 hours of an algorithmic “Pass” verdict, users’ peak negative affect (quantified via an emotion-monitoring app) rose by an average of 48%, and the median duration of the negative state was 6.2 hours, a psychological impact far exceeding that of ordinary social interaction (fluctuation range ±15%).

The core controversy lies in how the algorithmic evaluation system narrows and entrenches aesthetic standards. In the training data of mainstream smash-or-pass models, roughly 78% of image samples come from mainstream European and American media (MIT Media Lab 2023 Dataset Audit). As a result, the model’s scoring bias against non-Caucasian facial features (such as epicanthic folds or wider nostrils) reaches −25%: for images of equal human-rated attractiveness, the algorithm’s output falls in the lower 25th percentile. In 2023, a 15-year-old South Korean user with hirsutism was repeatedly rated by one platform’s model in the bottom 10% for attractiveness (at times the system’s lowest 1st percentile); once the case spread on social media, it triggered a collective complaint that drew over 500,000 views within 48 hours. Because the algorithm lacks facial-diversity recognition (visible conditions and rare features have less than 0.1% coverage in the database), this automated evaluation amounts to a systematic devaluation of the “non-standard” body.
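The kind of disparity the audit describes can be checked with a simple group-wise comparison of model scores against human ratings. The sketch below is a minimal, hypothetical illustration; the group labels, scores, and the `audit_bias` helper are invented for the example, not taken from the MIT Media Lab audit.

```python
# Minimal sketch of a group-wise scoring-bias audit; all names and numbers
# here are hypothetical.

def audit_bias(records):
    """records: iterable of (group, human_score, model_score) tuples.

    Returns the mean (model - human) gap per group. A consistently
    negative gap means the model underrates that group relative to
    human raters."""
    gaps = {}
    for group, human, model in records:
        gaps.setdefault(group, []).append(model - human)
    return {g: sum(d) / len(d) for g, d in gaps.items()}

sample = [
    ("A", 7.0, 7.1), ("A", 6.0, 5.9),   # group A: gap near zero
    ("B", 7.0, 5.2), ("B", 6.0, 4.4),   # group B: underrated by ~1.7 points
]
print(audit_bias(sample))
```

An audit like this only exposes the offset; correcting it requires rebalancing the training data itself.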

In social-media circulation, this format readily slides into public humiliation. By sharing screenshots, users turn private ratings into public spectacle. Research has found that roughly 63% of related TikTok video titles contain comparative language (such as “AI says my best friend only got 3 points!”). These posts were shared an average of 820 times within 24 hours (Klear social-media monitoring data), and the density of personal attacks in comment sections (mockery of BMI, mockery of facial features) reached 11.3 per 100 comments, far above the platform average of 2.1. Data recorded by Australia’s eSafety Commissioner in 2024 show that in 18% of reports of cyberbullying among teenagers, smash-or-pass screenshots were used as “evidence” for the attacks, and victims’ probability of developing an anxiety disorder (GAD-7 scale score ≥ 10) rose by 41%.


Commercial operation further undermines the “harmless” defense. To increase dwell time (session-duration target ≥ 5 minutes), platform algorithms are often designed to deliberately create score volatility (variance > 0.3), prompting users to upload and re-verify repeatedly. An analysis of one leading app’s backend code (reverse-engineering report) shows that after a user receives “Smash” twice in a row, the probability of a downgraded third score is manually set to 65% (confidence interval CI: 55%–73%), so as to induce payment for a “professional report” (unit price 2.99). When developers profit directly from users’ body anxiety (CPA of 230 per thousand paid conversions), and advertisers push weight-loss products against the “low attractiveness” label (conversion rate CR up 1.8×), smash or pass has become an industrial chain built on body shaming. A business model that generates nearly $80 million in revenue in a single quarter (Sensor Tower, 2024 Q1) rests, in essence, on quantifiable social-psychological costs.
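The engagement loop described in the report can be sketched in a few lines. The code below is a hypothetical reconstruction for illustration only: the function name, the 0.6 downgrade factor, and the 0–1 score scale are invented; the 65% figure is the one cited above.

```python
import random

# Hypothetical reconstruction of the engagement loop described above:
# after two consecutive "smash" verdicts, downgrade the next score with
# 65% probability, nudging the user toward the paid "professional report".

DOWNGRADE_PROB = 0.65  # probability cited in the reverse-engineering report

def next_verdict(history, base_score, rng=random):
    """history: prior verdicts ("smash"/"pass"); base_score: 0.0-1.0."""
    if history[-2:] == ["smash", "smash"] and rng.random() < DOWNGRADE_PROB:
        return "pass", base_score * 0.6   # artificially suppressed score
    return ("smash" if base_score >= 0.5 else "pass"), base_score

# A user on a two-"smash" streak faces a 65% chance of a forced downgrade:
print(next_verdict(["smash", "smash"], 0.8, random.Random(1)))
```

Alternating genuine and suppressed scores is exactly what produces the score volatility (variance > 0.3) the platforms target.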

It is worth noting that some jurisdictions have begun to act. The EU’s 2024 supplementary provisions to the DSA (Digital Services Act) require platforms to age-restrict automated body-scoring tools (mandatory verification of age 18 and over), with fines for violations of up to 6% of global revenue. In 2023, a California court accepted the first class-action lawsuit of its kind (Case No. RG23123456), accusing a smash-or-pass app of violating the CCPA (California Consumer Privacy Act) by collecting biometric data for negative ratings without explicit consent. Although technical standards (such as ISO/IEC 24027 on algorithmic bias) offer paths to improvement, given the general lack of transparency (95% of applications refuse to disclose dataset composition) and the absence of effective complaint mechanisms, the risk of this game sliding into structural discrimination far outweighs its entertainment value.
