city-artificial-intelligence / ai-uk-fringe-event-2024

AI UK Fringe Event on AI research at City, University of London

AI@City AI UK Fringe Event 2024

A Fringe Event co-located with the AI UK conference organised by The Alan Turing Institute.

This event showcases the AI research conducted at City, University of London, a member of the Turing University Network. Attendance is open to everyone; poster and oral presentations are by invitation.

Registration: please register your in-person attendance by filling in this form. Deadline: February 16, 2024.

Call for posters: please register your interest in presenting a poster here. Deadline: February 16, 2024. Notification of acceptance by February 21, 2024.

Agenda

09:00-09:30   Registration, poster set-up and coffee.

09:30-09:45   Welcome to the Fringe event

By Prof. Eduardo Alonso, Prof. Artur d'Avila Garcez and Dr. Ernesto Jimenez-Ruiz.

09:45-10:30   Keynote 1 (30 min + 15 min questions). Session Chair: Prof. Eduardo Alonso

Speaker: Prof. Michael Fisher (University of Manchester)

Title: Is there a path to Trustworthy AI?

Abstract: AI is very popular at present. But when we deploy it, especially in important or even critical situations, do we know what this use of AI will result in? And should we trust it to always work "well"? I will discuss general issues around the development of Trustworthy AI, and the broader governance of AI. These include verification (does it work?), beneficiality (is it working for our benefit?), and the analysis of these both before and after deployment.

Bio: Michael Fisher is a Professor of Computer Science and the Royal Academy of Engineering Chair in Emerging Technologies in the Department of Computer Science at the University of Manchester. His research concerns autonomous systems, particularly verification, software engineering, self-awareness, and trustworthiness, with applications across robotics and autonomous vehicles. Increasingly, his work encompasses not just safety but broader ethical issues such as sustainability and responsibility across these (AI, Autonomous Systems, IoT, Robotics, etc) technologies. Fisher co-chairs the IEEE Technical Committee on the Verification of Autonomous Systems, chairs the BSI Committee on Sustainable Robotics, is a member of the IEEE P7009 Standards committee on Fail-Safe Design of Autonomous Systems, and is a member of the Strategy Group of the UK's Responsible AI programme. He is currently on secondment (2 days per week) to the UK Government's Department for Science, Innovation and Technology, advising on issues around AI and Robotics.

10:30-11:30   Coffee break and Poster session (1h)

11:30-12:30   Short presentations from the School of Science and Technology (SST) (10 min + 5 min Q&A). Session Chair: Dr. Esther Mondragon.

12:30-13:45   Lunch and Networking (1h 15min)

13:45-14:30   Keynote 2 (30 min + 15 min questions). Session Chair: Dr. Pranava Madhyastha

Speaker: Prof. Francesca Toni (Imperial College London)

Title: Interactive Explanations for Trustworthy AI.

Abstract: AI has grown massively in the last ten years or so, predominantly due to increased processing power availability, big data and powerful statistical and probabilistic models to support machine learning and reasoning as vector (rather than symbol) manipulation. The resulting AI models tend to be inscrutable black-boxes, whose trustworthiness may be doubtful, given that artifacts and biases may be present in these models. In this talk I will explore the role that interactive explanations, supported by computational argumentation, may have towards the trustworthiness of AI models.

Bio: Francesca Toni is a Professor in Computational Logic and Royal Academy of Engineering/JP Morgan Research Chair in Argumentation-based Interactive Explainable AI, in the Department of Computing at Imperial College London, where she is a member of the Artificial Intelligence research theme and the leader of the CLArg (Computational Logic and Argumentation) research group. She is also a member of the GLAM research group and of the AI@Imperial Network of Excellence at Imperial College London, and the founding leader of the Centre for eXplainable AI (XAI).

14:30-15:30   Short presentations from beyond SST (10 min + 5 min Q&A). Session Chair: Dr. Ernesto Jimenez-Ruiz.

15:30-16:30   Coffee Break and Poster session (1h)

16:30-17:15   Panel on Reliability, Safety, and Fairness in AI Systems.

Panel Moderator: Prof. Artur d'Avila Garcez

Panelists: Prof. Francesca Toni, Prof. Robin Bloomfield, Prof. Andrea Baronchelli

17:15-17:30   Closing


Posters:

  • Riad Ibadulla (City): Fat-U-Net: Non-Contracting U-Net for Free-Space Optical Neural Networks
  • Qiqi Su (City): Modular Neural Networks for Time Series Forecasting: Interpretability and Feature Selection using Attention
  • Daniel Sikar (City): The Role of Game Engines in Autonomous System Safety
  • Alex Dean (City): Algebras of actions in an agent's representations of the world
  • Sergio Naval Marimont (City): DISYRE: Diffusion-Inspired Synthetic Restoration for Unsupervised Anomaly Detection
  • Nitisha Jain (KCL): Semantic Interpretations of Multimodal Embeddings towards Explainable AI
  • Kimberley Verity (City): New rooted-tree indices
  • Hao Ma (Queen Mary): Rethinking Machine Learning Superiority: The Model-Based Edge in Asset Pricing
  • Sevinj Teymurova (City): Aligning network of ontologies using Graph AI
  • Minhal Mahmood (City): Self-Engineering of Domestic Smart Appliances using AR&VR Technologies
  • Professor Richard Curran (City): Using AI for AI! (or Artificial Intelligence for Aviation Intelligence!)
  • Constanza Musso (City): AI Advice-Taking in Financial Decision-Making: The Role of Preference on Advice Integration
  • Sandamali Wickramasinghe (City): Verifying AI systems via learning
  • Alex Clay (City): Social Conversational Agents with Semantic Memory through Dynamic Knowledge Graph Embeddings and Recommendation
  • Dmitrii Riabchenko (City): Machine Learning Clifford invariants of ADE Coxeter elements
  • David Tena Cucala (University of Oxford): Correspondences between Graph Neural Networks and Datalog
  • Lorenzo Belenguer (City): AI Bias: A suggestion for a framework of actions to detect, mitigate and reduce discriminatory bias