The Impact of Cyber Security on Social Media
According to a report released this year by the Identity Theft Resource Center, the number of data breaches tracked in the United States in 2017 hit a high of more than 1,500, up almost 45 percent over 2016. In one incident in 2018, the data of 29 million Facebook users was stolen.
In a report by Statista, large enterprises with over 1,000 employees were the hardest hit by cyber-attacks, with each incident costing affected companies an average of 500,000 U.S. dollars in 2020. 1,001 data breaches occurred in 2020 alone, and that same year, over 155.8 million individuals were affected.
Is social media our friend, or our foe?
Hackers once baited users with infected websites, reeling in victims one at a time. Now social media platforms like Twitter and Discord gather millions of users in one place.
"I wouldn't say [social media has] increased the number or variety of attacks; it's just concentrated them." - Marc Seybold, CIO of SUNY College at Old Westbury
On social media websites like Twitter and YouTube, URLs posted in tweets or comments can be altered to link to malicious websites. Suppose a link carries an altered query string at the end of its path that posts information to a hacker's server. Click it, and you are exposed to whatever the hacker has planted there.
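To make the altered-query-string trick concrete, here is a minimal Python sketch that inspects a link's query parameters and flags names commonly abused for redirects or data smuggling. The parameter list and the example URL are illustrative assumptions, not drawn from any real attack.

```python
from urllib.parse import urlparse, parse_qs

# Parameter names often abused to smuggle data or redirect victims.
# This list is an illustrative assumption, not an exhaustive blocklist.
SUSPICIOUS_PARAMS = {"redirect", "url", "next", "token", "session"}

def flag_suspicious_link(link: str) -> list:
    """Return the query parameters in `link` that warrant a closer look."""
    parsed = urlparse(link)
    params = parse_qs(parsed.query)
    return sorted(p for p in params if p.lower() in SUSPICIOUS_PARAMS)

# A hypothetical link that quietly forwards a token to an attacker's server:
link = "https://example.com/article?id=42&redirect=http://attacker.example&token=abc123"
print(flag_suspicious_link(link))  # ['redirect', 'token']
```

A check like this catches only the crudest cases; attackers can rename parameters freely, which is why the article later turns to learned detection.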
GabLeaks: What is the most valid means of extracting data?
As Leticia Bode describes, the user database of the right-wing social media website Gab was leaked in a release known as GabLeaks: 70 gigabytes of hacked user accounts, passwords, direct messages, and public and private posts. Researchers, students, and industry professionals are now debating the ethics of examining this data. When integrating AI models and cybersecurity, researchers ask:
What data must we feed into an AI model to extract the most valid, yet ethical, trends?
On one hand, there is, “...purpose and political benefit of the GabLeaks release regardless of whether or not researchers can use any of the data for academic studies.” Information from GabLeaks may foreshadow the outcome of future political events, making Gab’s leaked data a resource for understanding both today’s and tomorrow’s political climate (Miller, 2021).
However, we must consider the severe violations of privacy and consent: if one source of information is extracted with a crawler, while another contains personally identifiable information, we must weigh the validity against the intrusiveness of the former and the latter (Miller, 2021).
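One way researchers operationalize that trade-off is to redact personally identifiable fields from scraped records before any analysis. The sketch below shows the idea; the field names are hypothetical and not drawn from the actual GabLeaks data.

```python
# Hypothetical field names for illustration only; not the real GabLeaks schema.
PII_FIELDS = {"email", "password", "real_name", "direct_messages"}

def redact(record: dict) -> dict:
    """Keep only the fields that are plausibly safe to study in aggregate."""
    return {k: v for k, v in record.items() if k not in PII_FIELDS}

record = {
    "username": "user123",
    "email": "user@example.com",
    "password": "hunter2",
    "post_text": "a public post",
}
print(redact(record))  # {'username': 'user123', 'post_text': 'a public post'}
```

Redaction reduces intrusiveness but can also reduce validity, since some research questions depend on exactly the fields removed; that tension is the heart of the debate above.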
What is social engineering?
Social media outlets like Facebook encourage users to post information they don’t realize is visible to the public eye. Suppose a hacker learns the maiden name of a user’s mother through social media - a common security question - they may then reset or guess the user’s passwords (McClure, 2010).
Due to this behavior, researchers now study social engineering, defined as the use of deceit to manipulate individuals into disclosing confidential data. It is one of the most common types of threats social network users face. Researchers study:
- the involvement users have in their network
- the motivation to use said network
- the competence users have in dealing with threats on the network
AI is changing the way social engineering works​, making users more vulnerable to data leakage and mental manipulation. Exploits like GabLeaks can very well be replicated.
How do institutions mitigate exploits?
Institutions can attempt to mitigate the exploitations of hackers with numerous means, but even those means have their considerable issues. Consider these points listed in a study by Ann McClure:
- Access Control: Colleges and universities may be able to segment their networks and protect faculty research; however, doing so takes money and time.
- Data Monitoring: Blocking websites and monitoring user activity via a monitoring system can protect users. However, this is risky, since:
  - users may rebel in defense of their right to privacy
  - users may incidentally reveal personal information, and if a hacker penetrates the monitoring system, they can view it themselves
- Traffic Control: Institutions can either block program installations or alert the user to a program’s maliciousness. This may be impractical, since there are always loopholes users can exploit to download programs.
How can we best assure user safety? What will AI do for us?
The best means of assuring user safety and security is teaching users how to secure their own data. Blocking websites, segmenting networks, and monitoring user activity may be short-term solutions, but they do not teach users the best practices for protecting their information.
These days, algorithms track user engagement to learn which links users are most likely to click. Down the line, AI may tell us which links are likely to lead users to malicious websites. Humans can detect this on their own through instinct and experience; someday, machines may detect it in the blink of an eye. Institutions would no longer have to warn or block users throughout their internet pursuits.
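A toy version of what such a model might learn can be sketched with hand-written heuristics. The features and point values below are illustrative assumptions, not tuned or production-grade; a real system would learn them from labeled data.

```python
import re
from urllib.parse import urlparse

def url_risk_score(url: str) -> int:
    """Score a URL with simple heuristics of the kind a model might learn.
    Thresholds and weights are illustrative assumptions, not tuned values."""
    score = 0
    parsed = urlparse(url)
    host = parsed.hostname or ""
    if re.fullmatch(r"\d{1,3}(\.\d{1,3}){3}", host):
        score += 2   # raw IP address instead of a domain name
    elif host.count(".") >= 3:
        score += 1   # many subdomains, often used to impersonate brands
    if "@" in parsed.netloc:
        score += 2   # user-info trick: the real host hides after the '@'
    if len(url) > 75:
        score += 1   # very long URLs often conceal payloads
    return score

print(url_risk_score("http://192.168.0.1/login"))           # 2
print(url_risk_score("https://accounts.example.com/safe"))  # 0
```

Each heuristic is easy for an attacker to dodge individually, which is exactly why the article argues that learned, adaptive detection may eventually outperform fixed rules.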
Moving forward, there are some more questions to ask ourselves as we prod into AI x Cyber Ethics in social media platforms:
- Should institutions be allowed to monitor user behavior so AI can be trained to prevent malicious risk?
- Should risky programs be installed by the user so AI can learn how a user responds to them?
- Should money and time be sacrificed for this sort of study?
The Future...
Cybersecurity has a prevailing impact on social media. Paired with artificial intelligence, our future will carry a dichotomy of mistrust and security. With time, users must be well equipped with the knowledge to handle the technologies that can both manipulate and benefit them.