With the rise of artificial intelligence (AI) glasses, concerns about student safety and privacy are growing, yet potential benefits in education are also emerging.
For context, the most popular AI glasses on the market are currently the Meta Ray-Ban Display glasses. Meta CEO Mark Zuckerberg described them as the first AI glasses with a high-resolution display during a live demo. The glasses pair with a neural wristband that responds to tiny muscle movements and offer features such as live subtitles and real-time translation.
Additionally, users can play music, send messages, and take photos or videos through the display built into the lens. Since their public release on Sept. 30, consumer impressions have been mostly positive.
However, after a recording incident on a college campus, students are becoming increasingly concerned about safety and privacy in public settings.
On Oct. 2, at the University of San Francisco, a man reportedly approached women with unwanted comments and inappropriate dating questions while recording them with his Meta sunglasses, according to an advisory sent out by the university’s Department of Public Safety.
Meta has stated that the glasses have an LED light to let others know when they are recording, but students aren’t convinced it’s enough.
“I feel like a light is hard to see when the sun is out and it’s bright outside,” said Carlmont sophomore Sayuri Stock. “It wouldn’t be that noticeable.”
When asked how she would feel about seeing somebody wearing AI glasses at school, Stock expressed discomfort. “I would feel very uncomfortable and I probably would not approach them,” Stock said.
Beyond that, students are concerned about how the glasses could collect and store personal information.
“I’d like AI to see less of my life than more,” said Carlmont sophomore Andrew Xiong. “I wouldn’t like it to see people I’m talking to, where I live, or businesses that I interact with.”
But according to Meta’s Privacy Policy, photos, videos, audio, and location information are all collected from users of its wearable products, including AI glasses.
“I’m just a little bit wary of AI because I’d like it to gather less of my data,” Xiong said.
Despite these concerns, some educators see potential benefits, particularly in the classroom.
“It’s an interesting idea, the idea that you can put information even more readily at somebody’s fingertips,” said Scott DeBarger, one of Carlmont’s lead robotics mentors.
For the robotics team, AI glasses could help students map out physical structures or use augmented reality to plan designs. In class, students could record difficult lectures or receive real-time explanations.
The live subtitles and real-time translation also offer benefits beyond the classroom, including helping people with certain vision or hearing impairments.
While California has introduced new standards for large AI companies, no policies or regulations currently govern the use of AI glasses. Still, students and educators agree that the technology’s use in public needs to be monitored.
“I personally don’t think they should be allowed in a school environment,” Stock said.
DeBarger acknowledged the complicated reality of creating fair policy, saying, “The challenge with all policy stuff is to balance competing interests.”
Stock also emphasized that how people use technology determines its impact.
“I feel like some forms of AI and new technology can be beneficial, while some are detrimental, but it mostly just comes down to how it’s used, because every form of technology has positives and negatives,” Stock said.
As technology advances, consumers are left to adapt to rapidly changing societal norms.
“The rate at which the technology evolves may outpace the rate at which society adjusts,” DeBarger said.
