Here are some suggested topics for PhD projects from our group members. These projects are merely suggestions and illustrate the broad interests of the research group. Potential research students are encouraged to propose their own topics in these or other research areas in the field. All applicants are invited to contact the academic associated with the project when making an application.
If you are interested in studying with our Group but need help choosing a supervisor or topic, please feel free to contact Professor Shujun Li, the Group Head, for advice. From time to time, we also offer funded PhD studentships.
Security and Privacy of the Internet of Things (IoT)
Contact: Budi Arief
IoT has the potential to make our lives more comfortable and effortless, but IoT devices could also pose new large-scale privacy and security risks that are not yet fully understood. For example, data collected from these devices (with or without authorisation from their owners) could reveal too much information about someone, and criminals might exploit this wealth of information to mount more successful attacks, such as credit card fraud or social engineering attacks leading to identity theft. Furthermore, the abundance of connected, unsecured IoT devices makes it possible to launch large-scale DDoS attacks. Therefore, new approaches and techniques for securing IoT devices are needed, which will be the focus of this research.
Behavioural API for secure-by-design IoT systems
Contact: Laura Bocchi, Budi Arief
In this project you will develop a theory and tools for the design and implementation of secure IoT systems. The project will centre on Behavioural APIs. Behavioural APIs are abstract specifications of application-level protocols; they go beyond traditional APIs in that they specify conversation patterns among components/services (and not just sets of supported operations). Behavioural APIs have been successfully applied to guarantee safety (e.g., absence of deadlocks) via code generation, verification and monitoring. Security has been explored only to a limited extent in this context.
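To give a flavour of the idea, the toy sketch below (our own illustration, not an existing behavioural-API library) specifies a hypothetical IoT sensor's conversation pattern as a finite-state protocol and monitors calls against it at runtime, rejecting operations that fall outside the declared pattern:

```python
# Toy sketch: a behavioural API as a finite-state protocol, with a runtime
# monitor that rejects out-of-order operations -- e.g. a sensor must
# authenticate before publishing readings.

class ProtocolViolation(Exception):
    pass

class BehaviouralMonitor:
    """Checks that each operation follows the declared conversation pattern."""
    def __init__(self, transitions, start):
        self.transitions = transitions  # {(state, operation): next_state}
        self.state = start

    def step(self, operation):
        key = (self.state, operation)
        if key not in self.transitions:
            raise ProtocolViolation(
                f"'{operation}' not allowed in state '{self.state}'")
        self.state = self.transitions[key]
        return self.state

# Conversation pattern: connect -> authenticate -> (publish)* -> disconnect
sensor_protocol = {
    ("init", "connect"): "connected",
    ("connected", "authenticate"): "ready",
    ("ready", "publish"): "ready",
    ("ready", "disconnect"): "done",
}

monitor = BehaviouralMonitor(sensor_protocol, "init")
for op in ["connect", "authenticate", "publish", "publish", "disconnect"]:
    monitor.step(op)
# a compliant trace ends in state "done"; an attempt to publish before
# authenticating would raise ProtocolViolation
```

A traditional API would only list `connect`, `authenticate`, `publish` and `disconnect` as supported operations; the behavioural specification additionally rules out insecure orderings such as publishing before authentication.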
Technology to Support Perpetrators of Domestic Abuse
Contact: Virginia Franqueira
Domestic abuse and violence (e.g., technology-facilitated intimate partner violence) is on the rise. Research on this topic is centred on victims, but a key element in reversing the trend is supporting perpetrators to break the vicious cycle of abuse. This project aims to investigate, design and implement immersive and interactive computer-based interventions that allow perpetrators to explore and engage in domestic abuse/violence scenarios and to role-play alternative behaviours.
Age Verification for Online Services Accessible by Children
Contact: Virginia Franqueira
The UK regulator Ofcom recently published its latest “Children and parents: media use and attitudes report” (2021). It estimates that 42% of children aged 5–12 use social media sites and apps, such as Instagram, Snapchat, Facebook and TikTok, despite these services requiring a minimum age of 13. Age verification (AV) is at the heart of this problematic phenomenon, since the most common AV approach adopted by online platforms accessible to children is self-declared age. This project aims to validate existing forms of AV and their applicability to different age groups, investigate technical and social attributes that could be useful for AV of under-13s, and design and implement prototypes to evaluate novel approaches.
Investigation of techniques for the direct generation of encryption keys from digital systems (ICmetrics)
Contact: Gareth Howells
The digital revolution has transformed the way we create, destroy, share, process and manage information, bringing many benefits in its wake, and an ever-increasing number of embedded consumer and communication devices are at the heart of this revolution. However, such technology has also increased the opportunities for fraud and other related crimes. As the adoption of such technologies expands, it therefore becomes vital to ensure the integrity and authenticity of electronic digital systems and to manage, control access to and verify their identity. The University of Kent has developed novel techniques, termed ICmetrics, for generating encryption keys derived from properties of the software and/or hardware of digital systems. ICmetrics represents an exciting new approach to generating unique identifiers for embedded devices, enabling secure encrypted communication between systems and potentially significantly reducing fraudulent activity such as eavesdropping and device cloning. ICmetric authentication offers a novel way of regulating access to devices and is explicitly aimed at providing protection at the especially vulnerable points where data access is initiated.

Specifically, the aim is to investigate appropriate integrated encryption and digital signature facilities to protect data from unauthorised access, forgery and tampering. The project objectives are to evaluate available feature sets for a range of ICmetric measurements, determine those suitable for application in the direct encryption technology, and develop a set of prototype tools demonstrating the effectiveness of the proposed approach. The project will further explore the potential of ICmetric technology through novel measurement features and exploitation scenarios, such as those associated with cloud computing.
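The sketch below illustrates the general shape of such a scheme; the feature values, quantisation steps and function names are all hypothetical (identifying robust, device-distinguishing features is precisely the research question), and a standard key-derivation function stands in for the direct encryption technology:

```python
# Illustrative sketch only: quantise noisy device-feature measurements so
# that small run-to-run variation maps to the same stable values, then
# derive a key from them with a standard KDF (PBKDF2 here as a stand-in).

import hashlib

def quantise(features, steps):
    """Map noisy measurements to stable buckets (hypothetical step sizes)."""
    return tuple(round(f / s) for f, s in zip(features, steps))

def derive_key(features, steps, salt=b"icmetrics-demo"):
    stable = quantise(features, steps)
    material = ",".join(map(str, stable)).encode()
    return hashlib.pbkdf2_hmac("sha256", material, salt, 100_000)

# Two measurement runs of the same (hypothetical) device differ slightly...
run1 = [12.31, 440.2, 0.071]
run2 = [12.28, 440.9, 0.069]
steps = [0.5, 5.0, 0.01]

# ...but quantisation yields the same 256-bit key for both runs, while a
# device with different characteristics would derive a different key.
assert derive_key(run1, steps) == derive_key(run2, steps)
```

Because the key is recomputed from live measurements rather than stored, there is no static secret to extract from the device, which is the property that makes cloning and eavesdropping harder.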
Securing Permissioned Blockchains with Trusted Rings
Contact: Gareth Howells
The aim is to investigate the feasibility of enabling practical use of distributed ledger (blockchain) technology for networked transactions by employing novel secure systems techniques as an enabling authentication and authorisation mechanism. Distributed ledger technology currently represents a highly promising approach for addressing the need for secure transactions where no trusted third party is present. However, for a permissioned blockchain, where limited computing power may be available, a problem exists in preventing fake nodes from taking part in the blockchain process and undermining its integrity. This project will address how to secure a permissioned blockchain by employing a proof-of-stake function derived from the digital signatures of the mining and transaction nodes, augmenting the security provided by traditional mining rotation and proof-of-work functions with device authentication technology based on digital signatures. The device metric is derived from the operating characteristics of the transaction nodes themselves, crucially incorporating characteristics that are not static in nature but vary with the operating environment, and hence are not easily forged. The resulting nodes can be adapted to form a “Trusted Ring”, where compromise of a subset of the nodes does not compromise the entire ring, hence ensuring the integrity of the overall ledger.
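As a rough illustration of the threshold property of such a ring (our own toy sketch, not the proposed scheme: HMACs stand in for device-metric-derived digital signatures), a block is accepted only if enough ring members' endorsements verify, so a compromised minority cannot forge acceptance:

```python
# Toy sketch of a "trusted ring": each node endorses a block with a key
# derived from its device metric; the ring accepts the block only if a
# threshold of endorsements verify, so compromising a minority of nodes
# does not compromise the ledger's integrity.

import hashlib
import hmac

def node_key(device_metric: bytes) -> bytes:
    # Stand-in for a key derived from a node's operating characteristics.
    return hashlib.sha256(b"ring-demo|" + device_metric).digest()

def endorse(key: bytes, block: bytes) -> bytes:
    return hmac.new(key, block, hashlib.sha256).digest()

def ring_accepts(keys, endorsements, block, threshold):
    valid = sum(
        hmac.compare_digest(endorse(k, block), e)
        for k, e in zip(keys, endorsements)
    )
    return valid >= threshold

keys = [node_key(m) for m in (b"node-a", b"node-b", b"node-c", b"node-d")]
block = b"tx: alice -> bob : 10"
endorsements = [endorse(k, block) for k in keys]
endorsements[0] = b"\x00" * 32  # one compromised/faulty node

# Three of four endorsements still verify, so the ring accepts the block.
assert ring_accepts(keys, endorsements, block, threshold=3)
```

In the envisaged scheme the keys would additionally vary with each node's operating environment, so an attacker who copies a node's software still cannot reproduce its endorsements.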
How creative are crime-related texts and what does this tell us about cyber crime?
Contact: Shujun Li, Anna Jordanous
The main aim of the PhD project is to investigate whether crime-related texts can be evaluated in terms of creativity using automatic metrics. Such a study will help understand how crime-related texts are crafted (by criminals and by automated tools, possibly via a hybrid human-machine teaming approach), how they have evolved over time, how they are perceived by human receivers, and how new methods can be developed to educate people about the tactics of cyber criminals. The four tasks of the PhD project will include the following: (1) collecting a large dataset of crime-related texts; (2) developing objective (automatable) creativity metrics using supervised machine learning, targeted towards evaluating the creativity of crime-related texts (e.g., phishing emails, online hate speech, grooming, cyber bullying, etc.); (3) applying the creativity metrics to the collected data to see how malevolent creativity has evolved over the years and across different crimes; (4) exploring the use of generative AI algorithms to create more creative, and therefore more deceptive, crime-related texts.
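The project would learn such metrics from labelled data, but the toy proxy below (our own illustration; the weighting and the two features are arbitrary) shows the general shape of an automatable creativity score, combining lexical diversity with novelty relative to a reference corpus of known texts:

```python
# Toy creativity proxy: average of lexical diversity (type-token ratio)
# and novelty (fraction of words unseen in a reference corpus).

import re

def tokens(text):
    return re.findall(r"[a-z']+", text.lower())

def creativity_proxy(text, reference_corpus):
    toks = tokens(text)
    if not toks:
        return 0.0
    seen = set()
    for doc in reference_corpus:
        seen.update(tokens(doc))
    diversity = len(set(toks)) / len(toks)            # type-token ratio
    novelty = sum(t not in seen for t in toks) / len(toks)
    return 0.5 * diversity + 0.5 * novelty

# A formulaic phishing line scores lower than a more inventive one when
# compared against a (tiny, hypothetical) corpus of known scam texts.
corpus = ["your account has been suspended, click here to verify"]
formulaic = "your account has been suspended, click here now"
inventive = "congratulations, a mysterious parcel awaits your signature today"
assert creativity_proxy(inventive, corpus) > creativity_proxy(formulaic, corpus)
```

A learned metric would replace these hand-picked features with ones trained against human creativity judgements, which is the substance of task (2).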
Cyber security and Cyber Insurance
Contact: Jason Nurse
Cyber insurance is a relatively new field but one that is making significant waves in industry and academia. This project will critically investigate the nature of cyber insurance, how it relates to cyber security, and its interaction with topics such as cyber incidents (including ransomware), data gathering, and incident response. The project will be based on research conducted by the supervisor, e.g., Cyber Insurance and the Cyber Security Challenge (https://kar.kent.ac.uk/89041/) published with RUSI (Royal United Services Institute).
Cyber security and psychology: where do we go from here?
Contact: Jason Nurse
The human aspect of cyber security has become increasingly prominent in research and practice, a reality undoubtedly motivated by the range of cyberattacks and cybercrime that exploit individuals (e.g., phishing, social engineering), and the broader challenge of building secure and usable systems. This project seeks to combine the fields of computing, HCI and psychology to investigate the range of challenges faced by users, designers and implementers in creating systems and environments that are supportive of users. The project may also relate to business and security culture. The goal will be to understand these challenges and to develop novel approaches, methods and techniques to address them, encompassing technical as well as socio-technical solutions. As there are several different areas on which this project could focus, the background and research interests of the student will shape the research.
Type-based verification for differential privacy
Contact: Vineet Rajani
Enormous amounts of data are collected every day, and this data could be extremely useful for making many automated decisions. However, often this data cannot be released or used because of privacy concerns. Prior work has already shown that simple aggregation and anonymisation are not sufficient to prevent privacy loss for individuals. Differential privacy is a promising approach to this problem, which offers a statistical guarantee of an individual’s privacy. Intuitively, a mechanism is differentially private if the probability of obtaining a result with (or without) an individual’s data is almost the same. This limits the amount of information that can be learned about that individual. The aim of this project is to develop a type-theoretic framework for analysing differential privacy. We will use ideas from cost analysis and information flow control, two well-studied but very different domains in formal verification.
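As a concrete example of the guarantee being verified (the textbook Laplace mechanism, not the type-theoretic framework the project would build), a counting query changes by at most 1 when one individual's record is added or removed, so adding Laplace noise of scale 1/ε makes the released count ε-differentially private:

```python
# Minimal sketch of the standard Laplace mechanism for a counting query.

import random

def laplace_noise(scale):
    # The difference of two exponential variates is Laplace-distributed.
    return random.expovariate(1 / scale) - random.expovariate(1 / scale)

def private_count(records, predicate, epsilon):
    """Release a count with noise calibrated to sensitivity 1."""
    true_count = sum(predicate(r) for r in records)
    return true_count + laplace_noise(scale=1 / epsilon)

# Any single record changes the true count (here, 3) by at most 1,
# which the noise masks; smaller epsilon means more noise, more privacy.
ages = [34, 29, 51, 47, 62, 33]
noisy = private_count(ages, lambda a: a >= 40, epsilon=0.5)
```

A type system for differential privacy would track, at compile time, that the noise scale matches the query's sensitivity and that the privacy budget ε is not overspent across repeated queries.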