Hack2023 2nd Prize

From MECwiki
Revision as of 14:28, 11 December 2023 by Velez (talk | contribs)


1st Prize Award

Managing natural resources using AI, Edge Computing and Advanced Communication


Team

Team Sheikah Tower from Google LLC and Peking University

  • Qi Tang - Senior Hardware Engineer - Google
  • Yi Han - Google
  • Sharu Jiang - Peking University


Sheikak1.png

Introduction

The Edge Native Real-time Voice AI Assistant provides locally enhanced language and speech services (e.g., real-time translation and an AI voice bot) using state-of-the-art AI models. Its key feature is real-time streaming enabled by edge computing. It also leverages the ETSI MEC APIs to fine-tune or prompt the LLM with available local information (such as dialects, geography, and local culture) to deliver faster and more useful content to users. The demo shows the advantages of edge computing over traditional on-device and cloud-based services. The end-user interface can be a mobile device, wearable, IoT device, robot, and/or vehicle.

Main features:

  • Users can easily find nearby virtual assistants in a map view by leveraging the MEC APIs
  • Local AI virtual assistants are indexed by ZoneID and CellID
  • "Local" means the vector database and prompts are location dependent
  • The vector database and prompts are uploaded and designed by local business owners
  • The virtual assistants can be sophisticated, state-of-the-art AI models serving as real-time language interpreters (for example, Meta's latest speech-to-speech massive language models), which users can also find on the map as long as they are within the same ZoneID or CellID
  • The discovery range is also flexible: for example, indoor localization information from the MEC APIs can serve a museum exhibit tour (room specific), or a city tour can be driven by the user's device GPS signal
  • The app is agnostic to the user's end device because computation, memory, and location information are not on the device per se; iOS was chosen for demonstration purposes only
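The location-indexed lookup described above can be sketched as follows. This is a minimal illustration, not the project's actual code: the registry contents, field names, and the `find_local_assistants` helper are hypothetical, standing in for data that would come from ETSI MEC Location API responses and owner-uploaded vector databases.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class EdgeLocation:
    """Edge location key; hypothetical stand-in for MEC zone/cell identifiers."""
    zone_id: str
    cell_id: str

# Registry mapping edge locations to locally deployed assistants, each with
# its own vector database and prompt set curated by local owners.
# All entries below are illustrative placeholders.
ASSISTANTS = {
    EdgeLocation("zone01", "cell0170"): {
        "name": "Museum Tour Guide",
        "vector_db": "museum_exhibits.index",
        "prompt": "You are a room-specific guide for the local exhibit.",
    },
    EdgeLocation("zone02", "cell0208"): {
        "name": "City Tour Interpreter",
        "vector_db": "city_landmarks.index",
        "prompt": "You are a real-time interpreter for city tours.",
    },
}

def find_local_assistants(zone_id, cell_id=None):
    """Return assistants in the user's zone; narrow to a cell if one is given."""
    return [
        assistant
        for loc, assistant in ASSISTANTS.items()
        if loc.zone_id == zone_id and (cell_id is None or loc.cell_id == cell_id)
    ]
```

A map client would call `find_local_assistants` with the ZoneID (and optionally CellID) reported for the user's device, then render the matching assistants as pins.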


Sheikah-archi.png





Software resources

Project repository

https://github.com/Dako2/sheikah-tower.git


Project Videos