I am pleased to announce a special issue of MDPI’s IoT journal focusing on Cyber Security and Privacy in IoT which I am jointly guest editing with Dr James Nicholson.
The uptake of IoT devices continues to rise in many sectors. IoT devices, while convenient for the user, also introduce a myriad of security and privacy issues into the space. In order to protect users against security and privacy compromises, we must look at ways of improving users’ awareness of IoT security and privacy as well as better ways of presenting key information for users to act on.
The aim of this Special Issue is to report on cutting-edge methods for (i) educating users about IoT threats and/or (ii) tools that support user understanding and action. Examples may include visualisations, auditory interfaces (e.g. sonification), and voice interfaces, although solutions are not limited to these modes. We also encourage exploratory studies reporting on mental models, possible design guidelines, or future scenarios.
The special issue is now open for submission and articles will be published as soon as they have passed the peer review process, so the sooner you submit, the sooner people can read about your research.
For full details on this special issue and how to submit your manuscript, please visit the call for papers page.
Department of Computer and Information Sciences, Northumbria University, Newcastle-upon-Tyne, UK
I am pleased to announce a forthcoming research project to develop a fundamental understanding of the relationship between sonification design and the listener and to stimulate a revitalised agenda for sonification research and practice.
Project RADICAL is funded by a three-year Research Project Grant by the Leverhulme Trust (RPG-2020-113) and will commence in October 2020.
Bringing together researchers in sonification, music practice, listening, aesthetics, and ethnography at Northumbria University and Newcastle University, and using Northumbria’s IKO 3D loudspeaker, RADICAL will create a new space for sonification research.
For further information about the project and about two new post-doctoral research posts that we are recruiting to, please see the project RADICAL page.
13 December, 2019
Does your work cause you to think about the space around you? Could a better understanding of the spaces we inhabit, and of their acoustic and aesthetic properties, improve the way you think about your own practice?
On Friday 13th December, the Department of Computer and Information Sciences at Northumbria University, Newcastle upon Tyne, UK will be hosting a one-day cross-disciplinary workshop, lecture, and concert. This will be of interest to anyone involved in understanding how the space around us impacts our daily lives. The workshop will be led by Paul Vickers (Northumbria), Gerriet K. Sharma (Graz, Austria), and Angela McArthur (Queen Mary, University of London). It follows on from the popular workshop on sonification and sonic interaction design for space run by Paul and Gerriet at Soundstack 2019 on 8 November 2019.
The workshop will feature the IKO icosahedral loudspeaker which generates a stunning 3D sound field. Northumbria University has the only IKO in the UK, so this will be a great opportunity to come and see what IKO can do and the collaborative research opportunities it offers.
In the broad field of electroacoustic composition and sound design we have for some time been dealing with spatial sound phenomena that do not merely come from a direction and head for a vanishing point in the concert or studio space. Rather, these phenomena have spatial dimensions such as proliferation, width, and height, forming diverse sound masses that can penetrate, layer, and move around each other and, by their properties, define space itself. These phenomena, as perceived by composers, scientists, and audiences, give rise to something we call a shared perceptual space (SPS).
Spatial composition has become a subject of academic curricula, workshops, and master classes internationally. It is constantly triggering the development and extension of commercial and academic software solutions for the projection, placement, and movement of phantom sources, for the reproduction of higher-order recordings of “natural” sound fields, and for the creation of so-called immersive virtual sound environments.
Moreover, as spatial computer music matures and consolidates within institutions and organizations, it increasingly involves so-called 3D audio systems that can create auditory virtual environments (AVEs). Quite likely, in the very near future AVEs will be part of many people’s everyday lives, e.g. in cars, working spaces, intelligent homes, concert halls, and computer games.
For the composer, the question arises as to what extent a communicable or self-explanatory composition of plastic sound objects is conceptually, theoretically, and practically possible at all when faced with changing architectural situations and differing cultural descriptions and perceptions of space. It is therefore a matter of finding parameters for an intersubjective space for the perception of three-dimensional sound phenomena. Is there, within the field of space-sound composition, a space at the place of the music where the composer’s perception in the compositional process overlaps with both the engineers’ and the audience’s perception? Can at least an approximate circumference of such an overlap be described? How, and from which sides (linguistic, technical, artistic, etc.), can this field be approached?
In this cross-disciplinary workshop and lecture we will investigate different uses of the term SPS across a variety of fields as aesthetic strategies, showing that space has become one of the key concerns in scientific and artistic, applied and theoretical disciplines alike. By discussing examples from music, musicology, sociology, philosophy, architecture, and linguistics we will try to extract variables that can help to formulate a perception-based framework for a hybrid model of sound as space.
The workshop is free to attend but space will be strictly limited. Therefore, if you would like to come please email paul[dot]vickers[at]northumbria.ac.uk by 20 November and supply the following information:
25th International Conference on Auditory Display Northumbria University, Newcastle-upon-Tyne, UK
23–27 June, 2019
Theme/Special focus of ICAD 2019: Sonification for Everyday Life.
Digital technology and artificial intelligence are becoming embedded in the objects all around us, from consumer products to the built environment. Everyday life happens where People, Technology, and Place intersect. Our activities and movements are increasingly sensed, digitised and tracked. Of course, the data generated by modern life is a hugely important resource not just for companies who use it for commercial purposes, but it can also be harnessed for the benefit of the individuals it concerns. Sonification research that has hit the news headlines in recent times has often been related to big science done at large publicly funded labs with little impact on the day-to-day lives of people. At ICAD 2019 we want to explore how auditory display technologies and techniques may be used to enhance our everyday lives. From giving people access to what’s going on inside their own bodies, to the human concerns of living in a modern networked and technological city, the range of opportunities for auditory display is wide. The ICAD 2019 committee is seeking papers and extended abstracts that will contribute to knowledge of how sonification can support everyday life.
Important Dates:
For details on topics of interest, proposal format, submission instructions, and additional conference information please visit https://icad2019.icad.org/call-for-participation/
Papers Chair: Tony Stockman icad2019papers@icad.org
Conference Chairs: Paul Vickers and Matti Gröhn icad2019chairs@icad.org
About ICAD: First held in 1992, ICAD is a highly interdisciplinary conference with relevance to researchers, practitioners, artists, and graduate students working with sound to convey and explore information. The conference is unique in its specific focus on auditory displays and the range of interdisciplinary issues related to their use. Like its predecessors, ICAD 2019 will be a single-track conference, open to all, with no membership or affiliation requirements.
It is a pleasure to announce ICAD 2019, the 25th International Conference on Auditory Display. The conference is hosted by the Department of Computer and Information Sciences, Northumbria University and will take place in Newcastle upon Tyne, UK on 23–27 June 2019. The graduate student Think Tank (doctoral consortium) will be on Sunday, 23 June, before the main conference.
Digital technology and artificial intelligence are becoming embedded in the objects all around us, from consumer products to the built environment. Everyday life happens where People, Technology, and Place intersect. Our activities and movements are increasingly sensed, digitised and tracked. Of course, the data generated by modern life is a hugely important resource not just for companies who use it for commercial purposes, but it can also be harnessed for the benefit of the individuals it concerns.
Sonification research that has hit the news headlines in recent times has often been related to big science done at large publicly funded labs with little impact on the day-to-day lives of people. At ICAD 2019 we want to explore how auditory display technologies and techniques may be used to enhance our everyday lives. From giving people access to what’s going on inside their own bodies, to the human concerns of living in a modern networked and technological city, the range of opportunities for auditory display is wide. The ICAD 2019 committee is seeking papers, extended abstracts, multimedia, concert pieces, demos, installations, workshops, and tutorials that will contribute to knowledge of how sonification can support everyday life.
ICAD is a highly interdisciplinary academic conference with relevance to researchers, practitioners, musicians, and students interested in the design of sounds to support tasks, improve performance, guide decisions, augment awareness, and enhance experiences. It is unique in its singular focus on auditory displays and the array of perception, technology, and application areas that this encompasses. Like its predecessors, ICAD 2019 will be a single-track conference, open to all, with no membership or affiliation requirements.
A full Call for Participation with details of the submission classes and dates will be posted on the conference website soon.
Today’s computer networks are under increasing threat from malicious activity. Botnets (networks of remotely controlled computers, or “bots”) operate in such a way that their activity superficially resembles normal network traffic, which makes their behaviour hard to detect by current intrusion detection systems (IDSs). Therefore, new monitoring techniques are needed to enable network operators to detect botnet activity quickly and in real time. Here, we show a sonification technique using the SoNSTAR system that maps characteristics of network traffic to a real-time soundscape, enabling an operator to hear and detect botnet activity.
A case study demonstrated how using traffic log files alongside the interactive SoNSTAR system enabled the identification of new traffic patterns characteristic of botnet behavior and subsequently the effective targeting and real-time detection of botnet activity by a human operator. An experiment using the 11.39 GiB ISOT botnet data set, containing labeled botnet traffic data, compared the SoNSTAR system with three leading machine learning-based traffic classifiers in a botnet activity detection test. SoNSTAR demonstrated greater accuracy (99.92%), precision (97.1%), and recall (99.5%) and much lower false positive rates (0.007%) than the other techniques. The knowledge generated about characteristic botnet behaviors could be used in the development of future IDSs.
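The figures reported above follow the standard confusion-matrix definitions of accuracy, precision, recall, and false positive rate. A minimal Python sketch of those definitions is below; the counts used in the example are made up for illustration and are not the actual counts from the ISOT experiment.

```python
# Standard confusion-matrix metrics used to compare detectors such as
# SoNSTAR with machine-learning traffic classifiers.

def detection_metrics(tp, fp, tn, fn):
    """Return accuracy, precision, recall, and false positive rate."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)   # a.k.a. true positive rate
    fpr = fp / (fp + tn)      # false alarms among benign traffic
    return accuracy, precision, recall, fpr

# Illustrative (invented) counts, not the ISOT data set's real totals.
acc, prec, rec, fpr = detection_metrics(tp=995, fp=30, tn=99000, fn=5)
print(round(prec, 3), round(rec, 3))  # 0.971 0.995
```

Note that with heavily imbalanced traffic (benign packets vastly outnumbering botnet packets), a low false positive rate matters far more than raw accuracy, which is why it is reported separately.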
You can read the whole story in our IEEE Access article.
As part of his PhD research, Mohamed Debashi has built the SoNSTAR (Sonification of Networks for SiTuational AwaReness) tool. SoNSTAR is a real-time sonification system for monitoring computer networks to support network administrators’ situational awareness. SoNSTAR provides an auditory representation of all the TCP/IP traffic within a network based on the different traffic flows between network hosts. A user study showed that SoNSTAR raises situational awareness levels by enabling operators to understand network behaviour, while imposing lower workload demands (as measured by the NASA TLX method) than visual techniques. SoNSTAR identifies network traffic features by inspecting the status flags of TCP/IP packet headers. Combinations of these features define particular traffic events, which are mapped to recorded sounds to generate a soundscape that represents the real-time status of the network traffic environment. The sequence, timing, and loudness of the different sounds allow the network to be monitored and anomalous behaviour to be detected without the need to continuously watch a monitor screen.
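The flag-inspection step described above can be sketched in a few lines of Python. This is a simplified illustration, not SoNSTAR’s actual implementation: the event names and sound file names are hypothetical, and real SoNSTAR works on live flows between hosts rather than single packets.

```python
# Simplified sketch: mapping TCP header flag combinations to traffic
# events and (hypothetical) recorded sounds, in the spirit of SoNSTAR.

# TCP control flag bits as defined in RFC 793.
FLAGS = {"FIN": 0x01, "SYN": 0x02, "RST": 0x04,
         "PSH": 0x08, "ACK": 0x10, "URG": 0x20}

def decode_flags(flag_byte):
    """Return the set of flag names present in a TCP flags byte."""
    return {name for name, bit in FLAGS.items() if flag_byte & bit}

# Illustrative event/sound mapping (names invented for this sketch).
EVENT_SOUNDS = {
    frozenset({"SYN"}): ("connection_request", "birdsong.wav"),
    frozenset({"SYN", "ACK"}): ("connection_accept", "water_drop.wav"),
    frozenset({"FIN", "ACK"}): ("graceful_close", "wind.wav"),
    frozenset({"RST"}): ("connection_reset", "thunder.wav"),
}

def classify(flag_byte):
    """Map a packet's flags to a traffic event, or None if unmapped."""
    return EVENT_SOUNDS.get(frozenset(decode_flags(flag_byte)))

# A bare SYN packet (flags byte 0x02) maps to a connection request.
print(classify(0x02))  # ('connection_request', 'birdsong.wav')
```

In the full system, streams of such events would be rendered as overlapping sounds whose timing and loudness let an operator hear, for example, the burst of connection requests characteristic of a scanning bot.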
You can read the whole story in our PLoS One article.
A pre-print of the article is available on arXiv. It is part of the SoNSTAR project.