School of Engineering and Informatics (for staff and students)

Junior Research Associates

Applications for the JRA scheme will be open from Tuesday 13th February until Monday 25th March 2024.

Junior Research Associates 2024

The Junior Research Associate (JRA) scheme allows selected undergraduate students to take part in a summer research project under the guidance of an academic. Successful applicants will receive a bursary to undertake an eight-week, full-time research project over the summer months, working with academic supervisors to contribute to cutting-edge research across the university. 

The scheme is open to all University of Sussex undergraduate students who are in a middle year of study (i.e. who have completed at least one year of study and are not in their final year), have a genuine interest in pursuing postgraduate study and have a good academic track record.

Students can either apply with one of the projects below or suggest an original proposal (as long as a member of faculty is willing to supervise).

For further information about the scheme and how to apply, visit the JRA webpages.

Applicants must be sponsored by a member of faculty, so please contact the project supervisor to discuss the project and your application. Visit our website for a full list of our Informatics faculty.

See below for the current projects within the Department of Informatics, each with a description:

Machine-Learned Network Congestion Control 

Supervisor: George Parisis (G.Parisis@sussex.ac.uk)

Multiple users accessing a network must share the available resources: bandwidth and buffers. Network congestion is a network state characterised by increased delay and packet loss, caused by traffic traversing one or more bottleneck links where the required bandwidth exceeds the available one. Network congestion severely degrades users' quality of experience and must therefore be controlled. Congestion control involves end-hosts, and potentially in-network devices, and aims to maximise resource utilisation while allocating resources fairly among all users. This is commonly done on an end-to-end basis by regulating senders' transmission rates. Recently, a learning-based congestion control paradigm has gained traction; its key argument is that congestion signals and control actions are too complex for humans to interpret, and that machine-generated algorithms can therefore provide superior policies to human-derived ones. An objective function then guides the learning of the control strategy. Early work in this thread included offline optimisation of a fixed rule table [1] and online gradient-ascent optimisation [2], with later work adopting sequential decision-making optimisation via reinforcement learning (RL) algorithms [3, 4].
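
To make the learning framing concrete, the sketch below shows a toy congestion-control loop in Python: an observation of throughput, round-trip time and loss is scored by a reward that trades throughput off against queueing delay and loss, and a hand-written rule adjusts the sending rate. All names, weights and thresholds here are illustrative assumptions, not the interface of any of the systems cited below; an RL agent would replace the hand-written rule with a learned policy.

    # Toy congestion-control loop (illustrative assumptions throughout).
    from dataclasses import dataclass

    @dataclass
    class Observation:
        throughput: float  # packets acknowledged per control interval
        rtt: float         # smoothed round-trip time (ms)
        loss_rate: float   # fraction of packets lost in the interval

    def reward(obs: Observation, base_rtt: float,
               delay_w: float = 0.1, loss_w: float = 10.0) -> float:
        """Objective in the spirit of learned schemes: reward throughput,
        penalise queueing delay (RTT above the base RTT) and loss."""
        queueing_delay = max(obs.rtt - base_rtt, 0.0)
        return obs.throughput - delay_w * queueing_delay - loss_w * obs.loss_rate

    def adjust_rate(obs: Observation, rate: float, base_rtt: float) -> float:
        """Hand-written stand-in for a policy: back off on congestion
        signals, otherwise probe for more bandwidth. RL replaces this rule."""
        if obs.loss_rate > 0.01 or obs.rtt > 1.5 * base_rtt:
            return rate * 0.85
        return rate * 1.05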

RL-based congestion control is still in its infancy, and substantial research is required to yield deployable algorithms and the respective RL policies. In [5] we showed that existing approaches fall short when it comes to fairness, a fundamental requirement of congestion control. In this project, we will further explore the concept of fairness in RL-based congestion control through experimentation in both emulated [5] and simulated networks [6]. We will specifically experiment with RayNet [6], a simulation framework that we have developed; RayNet integrates state-of-the-art software for packet-level simulation (OMNeT++) and unified computing and RL (Ray/RLlib). We will consider novel approaches to fairness by integrating it as an explicit component of the reward, e.g. during centralised training, or even during regular operation by generating fairness signals through in-network telemetry.
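
One concrete way to make fairness an explicit reward component, sketched below under assumed names and weights, is to combine a per-flow efficiency term with Jain's fairness index computed over all flows' throughputs during centralised training. This is one possible design the project could explore, not the scheme from [5] or [6].

    # Fairness-aware reward sketch using Jain's index (assumed design).
    def jain_index(throughputs: list[float]) -> float:
        """Jain's fairness index: 1.0 when all flows receive equal
        throughput; tends to 1/n when one flow captures everything."""
        total = sum(throughputs)
        if not throughputs or total == 0.0:
            return 1.0
        n = len(throughputs)
        return total * total / (n * sum(x * x for x in throughputs))

    def fair_reward(efficiency_rewards: list[float],
                    throughputs: list[float], alpha: float = 1.0) -> float:
        """Centralised-training reward: mean per-flow efficiency plus a
        weighted fairness term, penalising policies that starve flows."""
        efficiency = sum(efficiency_rewards) / len(efficiency_rewards)
        return efficiency + alpha * jain_index(throughputs)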

[1] Keith Winstein and Hari Balakrishnan. 2013. TCP ex Machina: Computer-generated congestion control. ACM SIGCOMM Computer Communication Review 43, 4 (2013), 123–134.

[2] Mo Dong, Tong Meng, Doron Zarchy, Engin Arslan, Yossi Gilad, Brighten Godfrey, and Michael Schapira. 2018. PCC Vivace: Online-Learning Congestion Control. In Proceedings of USENIX NSDI. 343–356.

[3] Soheil Abbasloo, Chen-Yu Yen, and H Jonathan Chao. 2020. Classic meets modern: A pragmatic learning-based congestion control for the internet. In Proceedings of ACM SIGCOMM. 632–647.

[4] Nathan Jay, Noga Rotman, Brighten Godfrey, Michael Schapira, and Aviv Tamar. 2019. A deep reinforcement learning perspective on Internet congestion control. In Proceedings of ICML. 3050–3059.

[5] L. Giacomoni and G. Parisis. 2024. Reinforcement Learning-based Congestion Control: A Systematic Evaluation of Fairness, Efficiency and Responsiveness. In Proceedings of IEEE INFOCOM (accepted).

[6] L. Giacomoni, B. Benny, and G. Parisis. 2023. RayNet: A simulation platform for developing reinforcement learning-driven network protocols. CoRR abs/2302.04519.


AI-Based Task Offloading in Vehicular Edge Computing

Supervisor: Naercio Magaia (N.Magaia@sussex.ac.uk)

By partially or entirely offloading computations to the cloud, to a nearby edge server, or to nearby opportunistic computing resources (e.g., the onboard computers of parked vehicles), mobile edge computing (MEC) gives vehicles more powerful capabilities for running computationally intensive applications such as augmented reality. However, a critical challenge emerges: how to select the optimal set of components to offload, considering the vehicle's performance, its resource constraints and the offloading costs.
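
As a minimal illustration of that trade-off, the sketch below compares local execution time against offloading time (uplink transmission plus remote execution) for a single task. The parameter names and the simple linear cost model are assumptions for illustration, not the model the project will necessarily adopt.

    # Back-of-the-envelope offloading decision for one task (assumed model).
    def local_time(cycles: float, vehicle_cpu_hz: float) -> float:
        return cycles / vehicle_cpu_hz

    def offload_time(cycles: float, input_bits: float,
                     uplink_bps: float, edge_cpu_hz: float) -> float:
        # Uplink transmission delay plus remote execution; downloading the
        # (typically small) result is neglected here.
        return input_bits / uplink_bps + cycles / edge_cpu_hz

    def should_offload(cycles: float, input_bits: float, vehicle_cpu_hz: float,
                       uplink_bps: float, edge_cpu_hz: float) -> bool:
        """Offload when the edge finishes sooner than the vehicle would.
        A learned policy would also weigh energy, cost and channel dynamics."""
        return (offload_time(cycles, input_bits, uplink_bps, edge_cpu_hz)
                < local_time(cycles, vehicle_cpu_hz))

    # Example: a 2-gigacycle AR task with 1 MB of input over a 10 Mbit/s uplink
    print(should_offload(2e9, 8e6, 1e9, 10e6, 2e10))  # True: 0.9 s vs 2.0 s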

This project focuses on developing and evaluating AI algorithms for optimising MEC offloading decisions. Deep learning and alternative techniques will be explored to implement integrated solutions, and the developed algorithms will be evaluated through computer simulation.


Speech denoising with event-based (spiking) neural networks

Supervisors: Prof Thomas Nowotny and Dr James Knight

Mentor: Thomas Shoesmith

In our group, we are working on a project with Intel to develop algorithms that use event-based (spiking) neural networks for denoising speech data. Speech denoising is the removal of noise from speech recordings or transmissions and is a key technology in many areas of telecommunications. Intel has recently published a speech denoising challenge [1] and, in our project, we aim to train event-based (spiking) neural networks that can eventually be deployed on the Intel Loihi 2 neuromorphic system [2]. As a JRA you would participate in this research, working on speech pre-processing and exploring aspects of the models in the mlGeNN framework [3].
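
To give a flavour of the pre-processing step, the sketch below converts a waveform into the log-magnitude spectrogram frames a denoising network typically consumes. The frame and hop sizes are illustrative choices, not the challenge's actual settings.

    # Waveform -> log-magnitude spectrogram frames (illustrative settings).
    import numpy as np

    def log_spectrogram(wave: np.ndarray, frame_len: int = 512,
                        hop: int = 128) -> np.ndarray:
        """Windowed FFT over the signal; returns an array of shape
        (num_frames, frame_len // 2 + 1)."""
        window = np.hanning(frame_len)
        frames = [wave[i:i + frame_len] * window
                  for i in range(0, len(wave) - frame_len, hop)]
        return np.log1p(np.abs(np.fft.rfft(np.stack(frames), axis=1)))

    # Synthetic example: a 1 kHz tone buried in noise, sampled at 16 kHz
    sr = 16000
    t = np.arange(sr) / sr
    noisy = np.sin(2 * np.pi * 1000 * t) + 0.5 * np.random.randn(sr)
    print(log_spectrogram(noisy).shape)  # (121, 257)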

1. Timcheck, J., Shrestha, S.B., Rubin, D.B.D., Kupryjanow, A., Orchard, G., Pindor, L., Shea, T. and Davies, M., 2023. The Intel neuromorphic DNS challenge. Neuromorphic Computing and Engineering, 3(3), p.034005.

2. Davies, M., Srinivasa, N., Lin, T.H., Chinya, G., Cao, Y., Choday, S.H., Dimou, G., Joshi, P., Imam, N., Jain, S. and Liao, Y., 2018. Loihi: A neuromorphic manycore processor with on-chip learning. IEEE Micro, 38(1), pp.82-99.

3. Turner, J.P., Knight, J.C., Subramanian, A. and Nowotny, T., 2022. mlGeNN: accelerating SNN inference using GPU-enabled neural networks. Neuromorphic Computing and Engineering, 2(2), p.024002.

