Imran Virani · Dec 14, 2022 · 2 min read

Security Impurities: News of the Week (December 9th – 15th)

Medical Outbreach: CommonSpirit's latest breach is a worrisome trend

 
We have covered breaches in the medical world many times, but the fact that these events keep happening is incredibly distressing. This week, CommonSpirit Health confirmed that hackers stole the personal data of 623,774 patients during a ransomware attack in October. The non-profit health system is the second largest in the US, with 140 hospitals and more than 1,000 care sites across 21 states. The attack disrupted operations and has severely harmed confidence in CommonSpirit's security practices. The exposed personal data included names, addresses, phone numbers, dates of birth, and unique CommonSpirit IDs. The company has promised to contact all affected individuals, but it has not disclosed which ransomware group was responsible for the attack.
 
Patients visit medical facilities when they are at their most vulnerable, and falling victim to cybercrime as a result of those visits is unforgivable. It reflects poorly both on the criminals and on the organizations charged with protecting patient data. Companies in the medical industry should understand just how sensitive this data is and do everything they can to protect it.
 

mAIware: How ChatGPT’s ability to write malicious code has worrisome cybersecurity implications

 
Artificial intelligence can be an incredible tool, but it brings its own problems to cybersecurity. While we have covered faltering AI security systems on this blog before, the idea of AI being behind the attacks is new, scary territory. Researchers have found that OpenAI's ChatGPT tool may be able to write malware.
 
Brendan Dolan-Gavitt, a cybersecurity researcher at NYU, asked ChatGPT to solve a simple challenge. He found that it correctly recognized a buffer overflow vulnerability and wrote code to exploit the flaw. While ChatGPT is not yet sophisticated enough to write quality code, it is advancing, and hackers may increasingly lean on it to create malware and phishing campaigns. OpenAI CEO Sam Altman warned that large language models "will be able to write code for real-world software vulnerabilities" in the future. That is a concerning statement, but unfortunately one that security professionals need to prepare for. Soon, the computer trying to hack into your system may not have a face behind the screen.
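For readers unfamiliar with the class of bug involved, here is a deliberately minimal, hypothetical C sketch of the kind of stack buffer overflow ChatGPT was asked to reason about. It is not the code from Dolan-Gavitt's challenge, only an illustration of how copying attacker-controlled input into a fixed-size buffer without a length check can corrupt adjacent memory.

```c
#include <stdio.h>
#include <string.h>

/* Toy example of a stack buffer overflow (not the actual challenge code).
 * 'name' holds 16 bytes, but strcpy() copies as many bytes as the caller
 * supplies, so a longer input overruns the buffer and can overwrite
 * adjacent stack data such as the saved return address. */
void greet(const char *input) {
    char name[16];
    strcpy(name, input);          /* no bounds check: the vulnerability */
    printf("Hello, %s\n", name);
}

int main(int argc, char **argv) {
    if (argc > 1) {
        greet(argv[1]);           /* an argument longer than 15 chars overflows */
    }
    return 0;
}
```

A safe version would use a bounded copy such as snprintf(name, sizeof(name), "%s", input). Spotting a missing bounds check like this, and then turning it into working exploit code, is exactly the kind of pattern-matching task a large language model can be prompted to perform.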
 

How low can you go?: North Korean hackers take advantage of the Itaewon tragedy

 
When combating cyber attackers, defenders should not expect morality. Sure, some self-described "White Hat" hackers and eco-terrorists argue that they hack for the greater good. However, most threat actors have no problem exploiting people at their worst. Often, they will exploit tragedies that draw widespread attention from unsuspecting targets.
 
This week, it was learned that APT37, a North Korean government-backed hacking group, used the Itaewon tragedy (in which more than 150 people were killed during a Halloween crowd crush) as a lure to trick South Korean individuals into opening malicious files. The files exploited a zero-day vulnerability, CVE-2022-41128, in Internet Explorer's JScript engine. Because Microsoft Office renders HTML content using the Internet Explorer engine, even users who do not use Internet Explorer as their browser were at risk. APT37's exact intentions are unknown, but given its past targets, the attack was likely intended to steal information from compromised PCs or deliver a destructive payload.
 
Microsoft was quick to patch the vulnerability, but the incident shows the depths to which some hackers will stoop. In staying cyber aware, remember that large-scale tragedies such as Itaewon draw both sympathetic eyes and malicious actors. While there are tears in some eyes, there are dollar signs in theirs.
