You are looking at the page of a previous edition of GeekWeek.
Early botnets used the Internet Relay Chat (IRC) protocol to communicate with their Command and Control (C2) servers. Since then, many different protocols have been used by botnets and other types of malicious software (downloaders, ransomware, remote access Trojans, etc.) to communicate. Some protocols are entirely custom-made and use highly sophisticated encryption, while others masquerade as ordinary Hypertext Transfer Protocol (HTTP). In this project, participants will use machine learning techniques to develop tools that detect and identify malicious software using the HTTP protocol.
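A sketch of what the feature-extraction side of such a detector might look like, assuming plain-text HTTP request headers as input. The feature names and the choice of features are illustrative only; a real project would select features empirically and feed them to a trained classifier.

```python
import re

def http_features(request: str) -> dict:
    """Extract simple features from a raw HTTP request header block.

    These are the kinds of features that could feed a classifier
    (e.g. logistic regression) trained to separate malware C2
    traffic from ordinary browser traffic.
    """
    lines = request.strip().splitlines()
    headers = {}
    for line in lines[1:]:  # skip the request line itself
        if ":" in line:
            key, value = line.split(":", 1)
            headers[key.strip().lower()] = value.strip()
    host = headers.get("host", "")
    return {
        "num_headers": len(headers),
        "has_user_agent": "user-agent" in headers,
        "has_referer": "referer" in headers,
        # Malware often hard-codes its C2 server as a raw IP address.
        "host_is_ip": bool(re.fullmatch(r"\d{1,3}(\.\d{1,3}){3}", host)),
    }
```

A request with no User-Agent and a raw-IP Host header is not proof of malice, but combinations of such signals are exactly what a classifier can learn to weigh.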
Project participants will be using Big Data technologies to mine a large collection of spam samples to two ends: as a potential source of URLs leading to exploit kits, and as a source for finding phishing waves containing dangerous attachments. Participants will then develop regular expressions to block malware delivery. To that end, participants will analyze email headers, infrastructure, and indicators of compromise to develop prevention methods. This project also aims to raise awareness of countermeasure techniques.
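The URL-extraction and regex-blocking steps could be sketched as follows. The blocklist patterns below are invented for illustration; real patterns would come from analyzing actual delivery campaigns.

```python
import re

# Hypothetical blocklist: regexes matching URL shapes observed in
# exploit-kit delivery campaigns (these patterns are made up).
BLOCK_PATTERNS = [
    re.compile(r"/gate\.php\?id=\d+"),
    re.compile(r"\.top/[a-z0-9]{12}\.exe$"),
]

URL_RE = re.compile(r"https?://[^\s\"'<>]+")

def extract_urls(email_body: str) -> list:
    """Pull every URL out of a plain-text email body."""
    return URL_RE.findall(email_body)

def is_blocked(url: str) -> bool:
    """True if the URL matches any delivery-blocking regex."""
    return any(p.search(url) for p in BLOCK_PATTERNS)
```

In practice the extracted URLs would first be triaged (sandboxed, scanned) before a pattern generalizing over a campaign is written and deployed.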
Analyzing a large number of diverse logs while making informed decisions about the actions to be taken can be challenging. Participants in this project will work together to develop more efficient tools and techniques to mine and analyze logs, identify the factors at play, and act on those logs.
Project participants will be working on automated static and dynamic analysis of malicious Android applications. Dynamic analysis consists of executing the potentially malicious application in a controlled environment. Static analysis, generally more lightweight than dynamic analysis, consists of dissecting the content of the application itself. Specifically, participants will take a deep dive into malware using SSL and attempt to inspect the content of encrypted communications in order to better understand how and why malicious Android applications are using SSL.
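Because an APK is an ordinary ZIP archive, a first static-triage pass needs no specialized framework at all. The sketch below enumerates an APK's contents and hashes its Dalvik bytecode; a dedicated tool (e.g. an Android analysis framework) would then decode the manifest and bytecode properly.

```python
import hashlib
import zipfile

def triage_apk(apk_file) -> dict:
    """Lightweight static triage of an Android package.

    `apk_file` is a path or file-like object.  An APK is a ZIP
    archive, so we can enumerate its entries, hash the Dalvik
    bytecode, and note features of interest such as native code.
    """
    with zipfile.ZipFile(apk_file) as apk:
        names = apk.namelist()
        report = {
            "num_entries": len(names),
            "has_manifest": "AndroidManifest.xml" in names,
            # Bundled native libraries live under lib/<abi>/.
            "has_native_code": any(n.startswith("lib/") for n in names),
            "dex_sha256": None,
        }
        if "classes.dex" in names:
            report["dex_sha256"] = hashlib.sha256(
                apk.read("classes.dex")).hexdigest()
    return report
```

The bytecode hash gives a stable identifier for clustering samples, and flags like `has_native_code` help decide which samples deserve the more expensive dynamic analysis.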
2016 is the Year of Ransomware. Over the last two years, it has grown to become the most common cyber threat faced by businesses and individuals. There is a need to better understand ransomware, to identify it more promptly, and to prevent it from compromising the availability of the most critical business documents. In particular, exploit kits remain one of the most common threat vectors used to compromise hosts with ransomware, along with downloaders and email attachments. The invisible nature of the threat from exploit kits, which are used to silently and effectively deliver malware to victims' workstations, calls for vigilant defence efforts. Project participants will develop techniques to identify and fingerprint exploit kits in order to better understand the threat they represent, and to better target remediation and patching efforts against them.
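One plausible fingerprinting approach, sketched here as an assumption rather than the project's actual method: exploit-kit landing pages are regenerated with fresh obfuscated strings on every visit, but their HTML skeleton often stays stable, so hashing the tag structure instead of the raw bytes survives that churn.

```python
import hashlib
from html.parser import HTMLParser

class TagSequence(HTMLParser):
    """Collect the ordered sequence of opening tag names in a page."""
    def __init__(self):
        super().__init__()
        self.tags = []

    def handle_starttag(self, tag, attrs):
        self.tags.append(tag)

def landing_page_fingerprint(html: str) -> str:
    """Fingerprint a page by its tag structure rather than its bytes.

    Two captures of the same landing page with different obfuscated
    payloads still hash to the same value if the skeleton matches.
    """
    parser = TagSequence()
    parser.feed(html)
    return hashlib.sha256(",".join(parser.tags).encode()).hexdigest()
```

Grouping captured pages by this structural hash is one way to cluster traffic back to a specific kit or campaign.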
Malware authors invest time and effort in developing evasion and anti-reversing techniques to detect the analysis environment, which often includes a virtual machine on which malicious software is purposely installed in order to study its behavior. The best way to avoid virtual machine detection is to simply not use a virtual machine, and use a physical machine (a.k.a. bare metal machine) instead. With this approach, the challenge is to rapidly restore the machine to the state it was in before the malicious software ran. Project participants will leverage open source software to build and deploy bare metal target systems capable of rapidly and automatically reverting state after running malware on them. This system will be used to perform automated mass analysis with a low rate of detection by the subject of the analysis.
Open Source resources contain a wealth of information related to cyber threat actors (actors, campaigns, indicators of compromise, etc.). Project participants will write code to ingest all this information and correlate it with information available in CCIRC's cyber systems. Take a little bit of human-gathered information, cross-reference it with a massive amount of computer-gathered information, and what you obtain is a larger, more comprehensive map of what the less reputable parts of the Internet look like. Participants will not simply be looking at mapping cyber threats, but also at ways to protect against them. With this in mind, they will also develop a framework to facilitate the sharing of this information.
Project participants will explore the capabilities of diverse technologies used by the community to share indicators of compromise (IOCs). In particular, some participants will focus on the Malware Information Sharing Platform (MISP), while others will be looking at STIX/TAXII. Different standards, different user pools, one common goal: exchange IOCs bi-directionally at machine speed in order to defend against cyber threats. While we all become better at gathering actionable information, and are all willing to share it, effective and timely sharing of information remains a challenge.
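To make the STIX side concrete, here is a sketch of a minimal STIX 2.1 Indicator object for a malicious URL, built with the standard library. The field set reflects my reading of the 2.1 specification's required Indicator properties; a real exchange would add labels, confidence, and relationships, and would be validated against the spec before sharing.

```python
import uuid
from datetime import datetime, timezone

def make_url_indicator(url: str) -> dict:
    """Build a minimal STIX 2.1 Indicator for a malicious URL.

    Identifiers take the form 'indicator--<UUID>', and the pattern
    uses the STIX patterning language over the url:value property.
    """
    now = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%S.000Z")
    return {
        "type": "indicator",
        "spec_version": "2.1",
        "id": f"indicator--{uuid.uuid4()}",
        "created": now,
        "modified": now,
        "pattern": f"[url:value = '{url}']",
        "pattern_type": "stix",
        "valid_from": now,
    }
```

Objects like this are what a TAXII server would transport between organizations, while MISP uses its own event/attribute model for the same IOCs.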
A spider is a computer program that automatically follows all links on a web site in order to build a comprehensive map of it. Pair a spider with a malware database, and what you get is MalSpider, a tool to find malware or vulnerabilities on web sites. Project participants will use a REST API to access CCIRC's databases from MalSpider and other integration projects, including integration with SIEMs.
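The core spider-plus-indicator-lookup idea can be sketched in a few lines. This is a toy illustration, not MalSpider itself: the `fetch` callable and the in-memory `indicators` set stand in for real HTTP retrieval and for a lookup against a malware database.

```python
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse

class LinkExtractor(HTMLParser):
    """Resolve every <a href> on a page against its base URL."""
    def __init__(self, base):
        super().__init__()
        self.base = base
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(urljoin(self.base, value))

def spider(start_url, fetch, indicators, limit=100):
    """Breadth-first crawl of one site, flagging known-bad URLs.

    `fetch` maps a URL to its HTML (injected so the crawler can be
    tested offline); `indicators` is a set of malicious URLs.
    """
    site = urlparse(start_url).netloc
    seen, queue, hits = set(), [start_url], []
    while queue and len(seen) < limit:
        url = queue.pop(0)
        if url in seen or urlparse(url).netloc != site:
            continue  # stay on the target site, visit each page once
        seen.add(url)
        if url in indicators:
            hits.append(url)
        parser = LinkExtractor(url)
        parser.feed(fetch(url))
        queue.extend(parser.links)
    return hits
```

Swapping the `indicators` set for a REST query against a real database is exactly the kind of integration work the project describes.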
As cybersecurity awareness grows, controlling access to information is becoming an important part of efforts to secure an organization. This project is meant to lay the groundwork for a collaborative effort enabling the cyber security community to proactively detect and search for leaked information. Leaked information is often published through publicly accessible but non-searchable services which constitute the "Deep Web". This project plans to explore and prototype tools that detect sensitive information moving freely on those services.
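A first cut at such a detector is pattern matching over dumped text, with a checksum to cut false positives. The patterns below are a minimal illustration (email addresses and payment-card numbers); a real tool would cover many more identifier types.

```python
import re

def luhn_ok(digits: str) -> bool:
    """Luhn checksum, used to weed out random 16-digit numbers."""
    total, parity = 0, len(digits) % 2
    for i, ch in enumerate(digits):
        d = int(ch)
        if i % 2 == parity:  # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
CARD_RE = re.compile(r"\b(?:\d[ -]?){15}\d\b")

def find_leaks(text: str) -> list:
    """Scan dumped/pasted text for likely leaked identifiers."""
    findings = [("email", m) for m in EMAIL_RE.findall(text)]
    for m in CARD_RE.findall(text):
        digits = re.sub(r"[ -]", "", m)
        if luhn_ok(digits):  # only report checksum-valid candidates
            findings.append(("card", digits))
    return findings
```

Running scanners like this against paste sites and other non-indexed services is the proactive-detection angle the project describes.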
One could expect that communications between Canadian endpoints would not need to be routed outside the country. However, for efficiency or other practical reasons, this is not always the case. Assessing how much Canadian Internet traffic leaves the country somewhere between source and destination is an interesting challenge. Project participants will define a framework to calculate the percentage of intra-Canada traffic that flows via non-Canadian paths, as a measure of the autonomy of the Canadian Internet under normal operating conditions.
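Once paths have been measured and geolocated, the headline metric reduces to a simple aggregation. The sketch below assumes the hard parts (traceroute collection and hop-to-country mapping, e.g. via a GeoIP database) happen upstream.

```python
def foreign_transit_rate(paths) -> float:
    """Fraction of intra-Canada paths that transit a foreign hop.

    `paths` maps a path identifier to the ordered list of ISO country
    codes of its hops; hops that could not be geolocated should be
    passed as None and are ignored here.
    """
    if not paths:
        return 0.0
    leaving = sum(
        1 for hops in paths.values()
        if any(c is not None and c != "CA" for c in hops)
    )
    return leaving / len(paths)
```

Treating unresolvable hops as unknown rather than foreign keeps the estimate conservative; a fuller framework would report the uncertainty separately.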
How healthy is the Canadian cyber space? How well are we doing compared to other countries? Are we getting better or worse? Project participants will combine information from CCIRC's databases with metadata collected from various other open and proprietary sources, with the intent of making the information searchable and producing reports from various points of view: regions, organizations, industry sectors, etc. Participants will develop dashboards that are searchable and filterable, as well as associated timelines, to represent the evolution of Canada's cyber health.