
Bilkent EEE Fair 2025

Showcasing Innovation: Where Theory Meets Practice in Electrical and Electronics Engineering.

Date: [Date of Event], [Year]

Time: [Start Time] - [End Time]

Location: [Event Location, e.g., EEE Building Foyer]

About the Fair

A Tradition of Excellence

The Bilkent University Electrical and Electronics Engineering Project Fair (EEE Fair) is an annual cornerstone event celebrating the culmination of our students' hard work and ingenuity. It provides a platform for senior students (EEE493/EEE494) and potentially others to present their capstone design projects to faculty, peers, industry representatives, and the wider community.

Discover innovative solutions, witness practical applications of complex theories, and engage with the next generation of engineers. The fair fosters connections between academia and industry, highlighting the cutting-edge research and development happening within our department.

Students presenting at the fair

Meet the Team

Course Coordinator Photo

[Coordinator Name]

Course Coordinator

Head TA Photo

[Head TA Name]

Head Teaching Assistant

Second Head TA Photo

[Second Head TA Name]

Second Head Teaching Assistant

Teaching Assistants

TA Photo

[TA Name 1]

TA Photo

[TA Name 2]

TA Photo

[TA Name 3]

TA Photo

[TA Name 4]

TA Photo

[TA Name 5]

TA Photo

[TA Name 6]

Featured Projects [Year]

Project 1 Thumbnail
Infinia

PHYSARUM

Emre Atmaca, Doğa Demirboğa, Boran Kılıç, Defne Yaz Kılıç, Gökhan Kocaoğlu, Mehmet Emre Uncu

In the mining industry, communication constraints and safety risks pose a serious threat to workers’ lives, particularly during emergencies.

Learn More →
Project 2 Thumbnail
Elekon

detecTHER

Mehmet Bayık, Ahmet Hakan Budak, Nazlı Demirel, Mustafa Enes Erdem, Alper Etyemez, Başbuğ Türkmen Gergin

In response to the hospitality industry's demand for efficient energy management, this project seeks to optimize energy consumption within the working area by accurately detecting occupancy and running energy systems only when necessary.

Learn More →
Project 3 Thumbnail
Tübitak Bilgem İltaren

DELTIS

Zeynep Akçil, Zeynep Yaren Baştuğ, Hakan Kara, Arda Kosşay, Emre Özba, Alper Özdemir

Thermal imaging plays a critical role in electronic warfare and defense systems, especially for heat detection and night vision applications.

Learn More →
Project 4 Thumbnail
Havelsan

DynaRL

Berkay Altıntaş, Berkehan Ercan, Ahmet Arda Kocabağ, Muhammet Bahadır Mutlu, Emirhan Tekez

This project focuses on the development of an autonomous unmanned aerial vehicle (UAV) system capable of navigating dynamic indoor environments using a combination of real-time perception and intelligent decision-making. The proposed solution integrates Simultaneous Localization and Mapping (SLAM), object detection, and reinforcement learning (RL) to enable safe and efficient path planning in environments populated with static and dynamic obstacles. The UAV is equipped with a 3D LIDAR, depth camera, and inertial measurement unit (IMU), which provide environmental and motion data for localization and obstacle avoidance. The SLAM module constructs a 3D map of the environment, while the object detection module, based on a YOLO architecture trained on the MS-COCO dataset, detects dynamic obstacles such as moving people in real time. The RL model operates in a continuous action space, taking LIDAR and dynamic obstacle pose estimates as input to generate local velocity commands that guide the UAV toward its target location. For increased training efficiency, the RL model was trained in the NVIDIA Isaac Sim simulation environment with 700 parallel agents. Once trained, the policy was deployed in the Gazebo simulation environment for testing. The UAV demonstrated successful autonomous navigation, achieving an 87% collision-free success rate in indoor environments. All functional requirements were fulfilled, including real-time obstacle detection, trajectory updates at 0.5-second intervals, and accurate SLAM outputs. Non-functional requirements such as size constraints, indoor operation, and user interface design were also met, with minor exceptions in the safety protocols related to lighting and IMU failure. This project establishes a solid foundation for transitioning toward real-world UAV deployment. The modularity and adaptability of the system make it suitable for applications in defense, search and rescue, and infrastructure inspection. Future work may involve extending the system to multi-agent scenarios and deploying it on embedded hardware for real-time use.

Learn More →
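
For readers curious how the DynaRL observation-to-action loop might look in code, here is a minimal sketch: LIDAR ranges and tracked obstacle poses go in, a local velocity command comes out. The stand-in "policy" below is a simple speed-scaling rule, not the team's trained reinforcement-learning network, and the 16-beam layout is an assumption.

```python
# Minimal sketch (not the team's code) of the RL interface described above:
# LIDAR ranges + dynamic-obstacle pose estimates in, a local velocity command out.
import numpy as np

def build_observation(lidar_ranges, obstacle_poses, goal_xy, uav_xy):
    """Concatenate sensor data into one observation vector (assumed layout)."""
    rel_goal = np.asarray(goal_xy) - np.asarray(uav_xy)
    rel_obstacles = (np.asarray(obstacle_poses) - np.asarray(uav_xy)).ravel()
    return np.concatenate([np.asarray(lidar_ranges), rel_obstacles, rel_goal])

def velocity_command(obs, n_lidar, max_speed=1.0):
    """Toy continuous action: steer toward the goal, slow down near obstacles."""
    lidar = obs[:n_lidar]
    rel_goal = obs[-2:]
    heading = rel_goal / (np.linalg.norm(rel_goal) + 1e-6)
    # Scale speed down when the closest LIDAR return is near.
    speed = max_speed * np.clip(lidar.min() / 2.0, 0.0, 1.0)
    return speed * heading  # (vx, vy) local velocity command

if __name__ == "__main__":
    obs = build_observation(
        lidar_ranges=np.full(16, 5.0),       # 16 beams, 5 m of free space
        obstacle_poses=[(2.0, 1.0)],         # one tracked dynamic obstacle
        goal_xy=(10.0, 0.0), uav_xy=(0.0, 0.0))
    print(velocity_command(obs, n_lidar=16))
```
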
Project 5 Thumbnail
Databoss

Talk2Fly

Selin Ataş, Kutay Kaplan, Öykü Özbirinci, Efe Özdilek, Ömer Tuğrul, Muhammed Serkan Yıldırım

This project presents the design and implementation of an autonomous drone control system powered by Large Language Models (LLMs) and real-time speech recognition, aimed at enhancing the usability and accessibility of UAV operations via natural voice interfaces. The system interprets Turkish voice commands and converts them into executable mission plans for drones operating in both simulated and real-world environments. The architecture is composed of three main components: a mobile speech-to-text interface built with Flutter for Android, a locally hosted Gemma 2 LLM pipeline for command classification and mission generation, and a drone control module. Communication across modules is managed via MQTT, and the LLM's JSON output is published to PX4 via MAVSDK to ensure structured, low-latency message exchange. The system enables dynamic mission updates, emergency overrides, and telemetry feedback. Experimental evaluation demonstrates a command input latency of under five seconds and 90% command classification accuracy. The final deployment supports full drone integration and lays the groundwork for potential improvements with visual recognition features. Designed for portability, edge deployment, and multilingual applicability, the system is suitable for use in domains such as defense, disaster response, and field inspection.

Learn More →
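
As an illustration of the structured hand-off the Talk2Fly abstract describes, the sketch below builds and validates a JSON mission message of the kind an LLM command classifier might emit. The field names, allowed actions, and validation rules are assumptions for illustration; in the actual system such a payload travels over MQTT and reaches PX4 through MAVSDK.

```python
# Minimal sketch of a structured JSON mission message; the schema is assumed.
import json

REQUIRED_FIELDS = {"action", "waypoints"}
ALLOWED_ACTIONS = {"takeoff", "goto", "land", "return_home", "abort"}  # assumed set

def make_mission(action, waypoints):
    """Build the mission dict an LLM classifier might emit for one voice command."""
    return {"action": action, "waypoints": waypoints, "priority": "normal"}

def validate_mission(payload: str) -> dict:
    """Parse and sanity-check a mission message before forwarding it to the drone."""
    mission = json.loads(payload)
    missing = REQUIRED_FIELDS - mission.keys()
    if missing:
        raise ValueError(f"missing fields: {missing}")
    if mission["action"] not in ALLOWED_ACTIONS:
        raise ValueError(f"unknown action: {mission['action']}")
    return mission

if __name__ == "__main__":
    msg = json.dumps(make_mission("goto", [[39.868, 32.748, 20.0]]))  # example waypoint
    print(validate_mission(msg))
```
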
Project 6 Thumbnail
Vestel

DriveSafe

Ertuğ Kayra Alemdar, Ömer Burak Avcıoğlu, Orkun Bayri, Yusuf Berkan Gökçe, Emirhan Yağcıoğlu, Selin Yurttaş

In recent years, the rising number of traffic accidents caused by driver distraction and fatigue has emphasized the need for in-vehicle monitoring systems to enhance road safety. DriveSafe is a Driver Monitoring System (DMS) developed in collaboration with Vestel, designed to identify and react to high-risk driver behaviors such as sleep, drowsiness, prolonged distraction, and unresponsiveness. The system uses a single-camera setup integrated with deep-learning models running on an NVIDIA Jetson TX2 platform. By analyzing facial and eye movements in real time, DriveSafe detects early signs of fatigue and driver distraction with over 90% accuracy and responds instantly to help prevent potential hazards. The DriveSafe system incorporates a suite of specialized machine learning and computer vision models, including those for face detection, eye detection, eye state classification, and eye tracking. The models are developed and optimized to work under a variety of environmental conditions, including varying lighting and weather scenarios, as well as diverse driver characteristics. When the system identifies risky behavior, it initiates prompt visual and auditory warnings via an independent alert mechanism, ensuring the driver is immediately informed of critical conditions. Built in accordance with Euro NCAP standards, DriveSafe is designed to be both cost-effective and easily integrated into existing vehicle architectures. Its modular design not only enhances reliability and performance but also facilitates future integration into original equipment manufacturer (OEM) systems. Additionally, the system's scalable architecture paves the way for broader applications, including comprehensive passenger monitoring and potential extensions to support autonomous driving features. Rigorous testing and iterative improvements ensure that DriveSafe consistently delivers high performance and robust safety, making it an essential tool for improving active vehicle safety and driver attentiveness.

Learn More →
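
The sketch below illustrates one way a per-frame eye-state classifier can be turned into a drowsiness alert, in the spirit of the DriveSafe pipeline. The 30-frame window and the 0.7 closed-eye ratio are illustrative PERCLOS-style values, not Vestel's or the team's parameters.

```python
# Minimal drowsiness trigger built on per-frame eye-state outputs (illustrative only).
from collections import deque

class DrowsinessMonitor:
    def __init__(self, window=30, closed_ratio=0.7):
        self.states = deque(maxlen=window)   # 1 = eyes closed, 0 = open
        self.closed_ratio = closed_ratio

    def update(self, eyes_closed: bool) -> bool:
        """Feed one frame's eye-state classification; return True to raise an alert."""
        self.states.append(1 if eyes_closed else 0)
        if len(self.states) < self.states.maxlen:
            return False                     # not enough history yet
        return sum(self.states) / len(self.states) >= self.closed_ratio

monitor = DrowsinessMonitor()
alert = False
for frame_state in [True] * 25 + [False] * 5:   # mostly-closed frame sequence
    alert = monitor.update(frame_state)
print("Alert raised:", alert)
```
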
Project 7 Thumbnail
Tübitak Bilgem İltaren

UAVDES

Mahmut Semih Akkoç, Ahmed Bircis Ayfın, Ömer Gölcük, Eren Akbaş, Muhammed Enes İnanç, Hasan Selçuk Kılıç

This project addresses the growing need for portable and efficient unmanned aerial vehicle detection systems (UAVDES) in modern combat scenarios, where drones play a decisive role in operational outcomes. Motivated by insights from recent military conflicts and expert analysis, including data from the Russia-Ukraine war and the Australian Army Research Centre, the solution emphasizes portability, cost-effectiveness, and accuracy. Developed in collaboration with TÜBİTAK İLTAREN, the system integrates AI and non-AI algorithms across three modules—detection, tracking, and physical design—achieving reliable identification and tracking of drones within a 100–500 meter range. It utilizes YOLO V8s for detection, Kalman filtering for tracking, and a dual-field-of-view camera setup mounted on a portable pan-tilt mechanism, all powered by a Jetson AGX Orin and managed by a Raspberry Pi 4. The system demonstrated high detection accuracy under clear weather, effective multi-drone tracking, and robust scanning performance, while offering a compact, deployable alternative to existing fixed-platform solutions.

Learn More →
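
To make the detect-then-track loop in UAVDES concrete, here is a minimal constant-velocity Kalman tracker that smooths and predicts a drone's position between YOLO detections. The state layout and noise covariances are placeholder assumptions, not the project's tuning.

```python
# Minimal constant-velocity Kalman tracker in image coordinates (illustrative values).
import numpy as np

class KalmanTracker:
    def __init__(self, dt=1 / 30):
        self.x = np.zeros(4)                         # state [x, y, vx, vy]
        self.P = np.eye(4) * 100.0                   # state covariance
        self.F = np.eye(4)
        self.F[0, 2] = self.F[1, 3] = dt             # constant-velocity model
        self.H = np.zeros((2, 4))
        self.H[0, 0] = self.H[1, 1] = 1.0            # we only measure position
        self.Q = np.eye(4) * 1e-2                    # process noise (assumed)
        self.R = np.eye(2) * 5.0                     # measurement noise (assumed)

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]

    def update(self, detection_xy):
        z = np.asarray(detection_xy, dtype=float)
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (z - self.H @ self.x)
        self.P = (np.eye(4) - K @ self.H) @ self.P

tracker = KalmanTracker()
for cx, cy in [(320, 240), (324, 239), (329, 238)]:  # YOLO box centers per frame
    tracker.predict()
    tracker.update((cx, cy))
print("Predicted next position:", tracker.predict())
```
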
Project 8 Thumbnail
Meteksan

RDR

Muhammed Enes Adıgüzel, Efe Berk Arpacıoğlu, Mustafa Cankan Balcı, Arda Çınar Demirtaş, Emir Ergin, Ahmet Bera Özbolat

In our final project, we developed a high-performance, real-time system for classifying radio modulation types, demodulating signals, and identifying radio and drone devices. Designed for communication security and signal intelligence applications, the system integrates Automatic Modulation Classification (AMC), signal demodulation, interactive control, and both live and simulated testing. Operating primarily in the 400–527 MHz band using USRP hardware and Hytera DMR radios provided by Meteksan Savunma, the system targets FM, AM, and 4-FSK modulations. A hybrid classification approach combines statistical methods (e.g., Kolmogorov–Smirnov test), machine learning models (XGBoost, Decision Trees), and deep learning (CNNs). The models were trained on both the RadioML dataset and custom-collected signals, tailored to hardware-specific bandwidth constraints. Although hardware limitations restricted live drone testing in the 2.4 GHz and 5.8 GHz bands, offline analysis achieved over 95% accuracy in identifying DJI drone models. The AMC module achieved sub-second latency in real-time trials, while demodulation was successful even under low-SNR conditions, enabling clear voice decoding from short signal bursts. Modular architecture and hardware-level optimizations addressed performance challenges, resulting in a robust and versatile system. The platform provides Meteksan Savunma with advanced capabilities for tactical signal recognition and drone detection, supporting surveillance, emergency response, and defense operations.

Learn More →
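
The toy example below shows the flavor of the statistical branch of the AMC approach: compare the envelope distribution of an unknown signal against modulation references with a two-sample Kolmogorov-Smirnov test. The synthetic AM/FM signals and their parameters are illustrative; the actual system works on IQ captures from USRP hardware and combines such statistics with machine-learning and CNN classifiers.

```python
# Envelope-distribution comparison with a two-sample KS test (synthetic, illustrative).
import numpy as np
from scipy.signal import hilbert
from scipy.stats import ks_2samp

fs, n = 100_000, 20_000
t = np.arange(n) / fs
msg = np.sin(2 * np.pi * 1_000 * t)                               # baseband message

am = (1 + 0.5 * msg) * np.cos(2 * np.pi * 10_000 * t)              # AM reference
fm = np.cos(2 * np.pi * 10_000 * t                                 # FM reference,
            + 2 * np.pi * 5_000 * np.cumsum(msg) / fs)             # 5 kHz deviation

def envelope(x):
    """Instantaneous amplitude via the analytic signal."""
    return np.abs(hilbert(x))

unknown = (1 + 0.5 * msg) * np.cos(2 * np.pi * 10_000 * t)         # actually AM
for name, ref in [("AM", am), ("FM", fm)]:
    stat, _ = ks_2samp(envelope(unknown), envelope(ref))
    print(f"KS distance vs {name}: {stat:.3f}")                    # smaller = closer match
```
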
Project 9 Thumbnail
Meteksan

DiRT-ANN

Ayşenur Ateş, Öznur Bulca, Alp Dursunoğlu, Okan Eceral, Irmak Ecefitoz, Yiğit Narter

This project aims to design a radar-based detection system that utilizes nonlinear signal processing techniques and artificial neural networks (ANNs) to improve target detection performance in short-range environments. A 60 GHz Frequency Modulated Continuous Wave (FMCW) radar test kit is used to detect specific targets such as humans and drones. The project replaces the conventional CFAR detector typically used in FMCW radar systems with a neural network-based approach. A Fully Convolutional Network (FCN), consisting of feature extraction and classification modules, is employed to perform detection on the Range-Azimuth map generated by the radar. A simulation environment is developed using MATLAB to generate synthetic Range-Azimuth data and train the neural network. Real-world testing is conducted using AWR6843ISK sensor kits provided by METEKSAN to validate the network’s performance. The project is divided into three main work packages: data acquisition and preparation, neural network development and training, and performance evaluation. The test procedure includes simulations, hardware testing, and real-world validation. If time permits, the neural network will also be implemented on an FPGA to enable real-time detection. Upon completion, this project is expected to provide METEKSAN with an advanced, efficient, and accurate radar solution for target detection and classification, with the goal of achieving at least 90% detection accuracy and superior performance compared to traditional CFAR-based methods.

Learn More →
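
For context, the sketch below implements the conventional detector that DiRT-ANN replaces: a 2-D cell-averaging CFAR sweep over a range-azimuth power map. Window sizes and the threshold factor are illustrative, and the neural-network detector itself is not shown.

```python
# 2-D cell-averaging CFAR on a synthetic range-azimuth map (illustrative parameters).
import numpy as np

def ca_cfar_2d(ra_map, guard=2, train=4, scale=8.0):
    """Return a boolean detection mask over a range-azimuth power map."""
    pad = guard + train
    padded = np.pad(ra_map, pad, mode="edge")
    detections = np.zeros_like(ra_map, dtype=bool)
    for r in range(ra_map.shape[0]):
        for a in range(ra_map.shape[1]):
            window = padded[r:r + 2 * pad + 1, a:a + 2 * pad + 1].copy()
            # Exclude the guard cells and the cell under test from the noise estimate.
            window[train:train + 2 * guard + 1, train:train + 2 * guard + 1] = np.nan
            noise = np.nanmean(window)
            detections[r, a] = ra_map[r, a] > scale * noise
    return detections

rng = np.random.default_rng(0)
ra = rng.exponential(1.0, size=(64, 32))     # noise-only range-azimuth map
ra[40, 10] = 60.0                            # one strong synthetic target
mask = ca_cfar_2d(ra)
print("Target detected:", bool(mask[40, 10]))
print("False alarms elsewhere:", int(mask.sum()) - int(mask[40, 10]))
```
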
Project 10 Thumbnail
Tübitak Bilgem İltaren

VIS2IR

Kerem Er, Damla Selen Gerdan, Emre Kulkul, Selçuk Vural, Umut Yıldız, Selami Utku Yüce

This project's primary goal is to convert visible images into infrared (IR) images, which have essential uses in fields including remote sensing and surveillance. Maintaining physical consistency between the RGB and IR modalities is difficult because of the significant differences between them. This project uses machine learning and image processing techniques to achieve accurate RGB-to-IR image translation. Building on highly cited work on image-to-image translation, it employs models that can learn cross-domain mappings. The main solution strategy uses the DAGAN model with segmented inputs and corresponding infrared images. Segmentation maps are obtained with MSEG, and a custom dataset has been assembled from Coaxials, SMOD, Roadscenes, Camel, MSRS, and Kaist. A test set has been created by randomly selecting 300 images from this dataset. To ensure physical consistency, the emissivity information from the HADAR paper has been incorporated into DAGAN. Performance is evaluated using PSNR, SSIM, MAE, FID, and LPIPS scores to ensure robustness and fidelity in the translated images. An interface that receives an RGB input and outputs the image's infrared version was designed. The model will be assessed using test data, with anticipated results achieving performance comparable to, and potentially surpassing, existing models in the literature on these evaluation metrics.

Learn More →
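
The evaluation metrics named in the VIS2IR abstract can be computed as in the sketch below for a predicted IR image and its ground-truth counterpart. FID and LPIPS require pretrained networks and are omitted here; the image size and noise level are arbitrary.

```python
# PSNR / SSIM / MAE between a predicted IR image and its ground truth (toy data).
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate_pair(pred_ir, true_ir):
    """Both images as single-channel float arrays scaled to [0, 1]."""
    return {
        "PSNR": peak_signal_noise_ratio(true_ir, pred_ir, data_range=1.0),
        "SSIM": structural_similarity(true_ir, pred_ir, data_range=1.0),
        "MAE": float(np.mean(np.abs(true_ir - pred_ir))),
    }

rng = np.random.default_rng(1)
truth = rng.random((128, 128))                                   # stand-in ground truth
prediction = np.clip(truth + rng.normal(0, 0.05, truth.shape), 0, 1)
print(evaluate_pair(prediction, truth))
```
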
Project 11 Thumbnail
Aselsan

DELTARF

Orhan Eren Bıçakçı, İrem Bilgin, Zeynep Maden, Semih Özel, Müge Sarıbekiroğlu, Elif Beray Sarıışık

In this project, a radio-navigation subsystem is developed to estimate delta-position using radio frequency signals without relying on GNSS infrastructure. The system, requested by ASELSAN, addresses the critical need for navigation in GNSS-denied environments such as urban canyons, military zones, and areas vulnerable to jamming. The project is structured into two phases. In Phase 1, a single transmitter-receiver pair of software-defined radios (SDRs) is used to estimate one-dimensional (1D) displacement by analyzing phase differences. Phase 2 aims to estimate two-dimensional (2D) vectorial movement using three transmitters and one receiver. Signal processing algorithms implemented on ADALM-Pluto SDRs rely on a Phase-Locked Loop (PLL) for frequency synchronization and a Delay-Locked Loop (DLL) for signal separation. MATLAB and GNU Radio are employed for simulation and real-time processing, respectively. Polynomial fitting and hybrid compensation reduce clock drift and offset. The system is deployed on a Raspberry Pi with an LCD for live feedback. The final system offers ±5 cm delta-position accuracy in a compact form, supporting defense and other critical applications.

Learn More →
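
The Phase-1 principle, displacement from accumulated carrier phase, can be sketched as below: delta_d = (lambda / 2*pi) * delta_phi after phase unwrapping. The 2.4 GHz carrier and the simulated motion are assumptions for illustration, not ASELSAN's operating parameters.

```python
# 1-D displacement recovered from the unwrapped carrier-phase difference (illustrative).
import numpy as np

C = 3e8
f_carrier = 2.4e9                       # assumed carrier frequency
wavelength = C / f_carrier

def displacement_from_phase(phase_samples):
    """Unwrap the measured phase and convert accumulated phase to displacement."""
    unwrapped = np.unwrap(phase_samples)
    return (wavelength / (2 * np.pi)) * (unwrapped - unwrapped[0])

# Simulate a receiver moving away at 0.5 m/s for 1 s, sampled at 100 Hz.
t = np.linspace(0, 1, 100)
true_displacement = 0.5 * t
phase = (2 * np.pi / wavelength) * true_displacement
wrapped_phase = np.angle(np.exp(1j * phase))          # what a phase detector reports
estimate = displacement_from_phase(wrapped_phase)
print(f"final estimate: {estimate[-1]:.3f} m (true 0.500 m)")
```
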
Project 12 Thumbnail
Roketsan

ALIGN

Gülin Cantürk, Furkan Gürdoğan, Bekir Sami İssisu, Satılmış Furkan Kahraman, Sait Sarper Özaslan, Nurettin Artun Sirgeli

A sensitive initial alignment solution is essential during flight, since it is used to calculate the launch vehicle's orientation, location, and velocity. Our project focuses on estimating initial alignment using an Inertial Measurement Unit (IMU) with an additional magnetometer. Our novel approach uses a coarse and a fine alignment stage, where coarse alignment utilizes the Gauss-Newton method and fine alignment utilizes a quaternion-based Extended Kalman Filter. Our aim was to achieve an error of less than 0.1 degrees in all axes with our fusion algorithm.

Learn More →
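
As a rough illustration of what a coarse attitude estimate looks like, the sketch below computes roll and pitch from the gravity vector and yaw from the magnetometer under a near-level assumption. This closed-form stand-in is not the project's Gauss-Newton coarse stage or its quaternion EKF, and the sensor values are made up.

```python
# Closed-form coarse attitude estimate from static accelerometer + magnetometer data.
import numpy as np

def coarse_alignment(accel, mag):
    """Roll/pitch from gravity; yaw from the magnetometer, assuming a near-level IMU."""
    ax, ay, az = accel / np.linalg.norm(accel)
    roll = np.arctan2(ay, az)
    pitch = np.arctan2(-ax, np.hypot(ay, az))
    mx, my, _ = mag / np.linalg.norm(mag)
    yaw = np.arctan2(-my, mx)        # near-level only; full tilt compensation omitted
    return np.degrees([roll, pitch, yaw])

accel = np.array([0.0, 0.0, 9.81])   # static, level IMU reading (m/s^2)
mag = np.array([22.0, 3.0, 42.0])    # roughly toward magnetic north (uT)
print("roll, pitch, yaw [deg]:", coarse_alignment(accel, mag))
```
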
Project 13 Thumbnail
Beko

REFOOD

Leyla Sude Ateş, İnci Çakır, Muhammed Recep Karadaş, Mustafa Selçuk, Ali Aral Takak, Yağmur Tan

The ReFood Scanner project, developed in collaboration with Arçelik A.Ş., aims to design an innovative system integrated into refrigerators for measuring the ripeness, freshness, and nutritional value of food, with a primary focus on meat products. This system utilizes gas sensors (e.g., MQ136 and MQ137) to detect spoilage-related gases like hydrogen sulfide and ammonia, as well as a Near-Infrared (NIR) spectrometer for analyzing food composition. Data from these sensors is processed through a feedforward neural network deployed on an STM32F407 microcontroller, which classifies meat as either “Edible” or “Spoiled” with over 85% accuracy. The results are transmitted to the refrigerator motherboard via UART communication and displayed in real time. The project emphasizes low power consumption, high scalability for future integration with other food types, and ease of user interaction. It addresses the global issue of food waste by enabling consumers to monitor food freshness more accurately and make informed consumption decisions. The ReFood Scanner is not only aligned with Arçelik's sustainability goals but also paves the way for the integration of intelligent food quality monitoring systems into everyday appliances.

Learn More →
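
The sketch below shows the shape of the inference step described above: a tiny feedforward network mapping gas-sensor and NIR features to an Edible/Spoiled decision. The feature layout, network size, and weights are placeholders; on the real product this forward pass runs in firmware on the STM32F407.

```python
# Toy feedforward classifier over scaled sensor features (placeholder weights).
import numpy as np

rng = np.random.default_rng(42)
W1, b1 = rng.normal(size=(8, 4)), np.zeros(8)    # hidden layer, 4 input features
W2, b2 = rng.normal(size=(1, 8)), np.zeros(1)    # output layer

def classify(features):
    """features = [H2S reading, NH3 reading, NIR band 1, NIR band 2], scaled to 0-1."""
    h = np.maximum(W1 @ features + b1, 0.0)       # ReLU hidden layer
    p_spoiled = 1.0 / (1.0 + np.exp(-(W2 @ h + b2)[0]))  # sigmoid output
    return ("Spoiled" if p_spoiled >= 0.5 else "Edible", p_spoiled)

label, p = classify(np.array([0.1, 0.05, 0.6, 0.4]))
print(f"{label} (p_spoiled = {p:.2f})")
```
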
Project 14 Thumbnail
Karel

EYES

Omar Ahmad Khan Durrani, Mohammad Hussain, Muhammad Ahmar Jamal, Efe Koca, Bahadır Öztürk, Mina Us

In this project, we focus on real-time environmental monitoring in an industrial IoT setting, aiming to improve workplace safety, operational efficiency, and environmental awareness. Existing solutions often address visual monitoring and sensor-based environmental analysis separately. Motivated by this gap, we propose an integrated system that unifies real-time AI-based perception with ambient data collection in a cohesive architecture. Our solution combines ambient sensors, edge devices with AI acceleration capabilities, a central server, 5G communication via the Quectel REDCAP EVB module, and a distributed camera network. Lightweight AI models, such as YOLOv11 and MobileNet, are deployed on edge devices for tasks like human detection and light signal recognition, enabling fast, localized decision-making. More complex inference tasks are offloaded to the central server, ensuring scalability and efficient resource usage. The system monitors environmental variables such as temperature, gas levels, and air quality alongside crowd behavior, using region-of-interest-based analysis for real-time anomaly detection. The integrated modules communicate with minimal latency, achieving 30–35 FPS on edge devices and maintaining stable 5G transmission with low packet loss. By calibrating and aligning each component, we created a robust and practical solution for industrial environments that require fast, reliable, and context-aware monitoring.

Learn More →
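
One simple form of the region-of-interest analysis mentioned above is sketched below: count person detections whose box centers fall inside a restricted zone and raise an anomaly past a threshold. The ROI coordinates, detection boxes, and threshold are illustrative, not the deployed system's configuration.

```python
# Region-of-interest crowding check over detector outputs (illustrative values).
def center(box):
    x1, y1, x2, y2 = box
    return (x1 + x2) / 2, (y1 + y2) / 2

def roi_anomaly(person_boxes, roi, max_people=1):
    """roi = (x1, y1, x2, y2); boxes are detector outputs in pixel coordinates."""
    rx1, ry1, rx2, ry2 = roi
    inside = [b for b in person_boxes
              if rx1 <= center(b)[0] <= rx2 and ry1 <= center(b)[1] <= ry2]
    return len(inside) > max_people, len(inside)

detections = [(100, 200, 160, 380), (300, 210, 360, 400),
              (320, 220, 380, 410), (700, 50, 760, 230)]
alert, count = roi_anomaly(detections, roi=(250, 150, 500, 450))
print(f"people in zone: {count}, alert: {alert}")
```
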
Project 15 Thumbnail
Savronik

BOBMARLEY

Ahmet Berkay Uysal, Kadir Kaan Durmaz, Mertcan Salih, Ozan Oğuztüzün, Ömer Faruk Sağlam, Yiğit Terzi

This project aims to develop a search coil type magnetic antenna and a supporting front-end circuit to sense magnetic fields in the 600–900 Hz frequency range.

Learn More →
Project 16 Thumbnail
Roketsan

MEMSENSE

Burak Alanyalıoğlu, Ege Aybars Bozkurt, Muhammet Melih Çelik, Ayberk Çınar, Yaren Kaya, Oğuzhan Yıldız

Conventional navigation systems rely heavily on Global Positioning System (GPS) signals, which are often unavailable or unreliable in certain environments such as underground mines, indoor facilities, or during natural disasters. In such GPS-denied scenarios, operating personnel lack access to accurate positional information, increasing the risk of disorientation, delays, and even life-threatening situations. Existing alternatives are often bulky, expensive, or unsuitable for real-time, wearable applications.

Learn More →
Project 17 Thumbnail
Beko

AquaServe

Mert Atakan Ümit, Selçuk Efe Koçkan, Kaan Özkan, Mustafa Buğra Özkan, Efe Yılmaz, Cemil Nalça

With advancing technology, automated products are rapidly entering the market, yet no fully automatic water/ice dispenser currently exists. This project focuses on designing an innovative automatic water/ice dispenser integrated into refrigerators to enhance user convenience, intended for mass production starting with Beko and for use worldwide. The proposed system combines multiple technologies to automate the filling process: glass placement detection using proximity sensors, glass height measurement with IR sensors, water level tracking via Time-of-Flight sensors, and gesture recognition enabled by radar sensors. An STM32F407 microcontroller acts as the central control unit, processing data from these sensors and ensuring accurate operation. Operation begins when the proximity sensors detect a glass placed in the dispensing area; the IR sensors then measure the height of the glass and the filling process starts. The water level in the glass is tracked with the Time-of-Flight sensor, and when the glass reaches the desired filling level the microcontroller stops the flow. The design is validated through comprehensive simulation and testing in both virtual and real-world environments, followed by integration and testing of the prototype in a refrigerator setup. The project progressed as planned, following the Work Breakdown Structure, timetable, and main steps. It emphasizes cost-effectiveness and accuracy, leveraging affordable sensor options while delivering high performance, and aims to provide a user-friendly, hands-free dispenser that works with various glass shapes and sizes, meets safety and ergonomic standards, and is ready for global markets.

Learn More →
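
The dispensing sequence described above can be viewed as a small state machine, sketched below with simulated sensor readings. The 80% fill target, fill rate, and state names are assumptions for illustration; the actual controller runs on the STM32F407 with real sensor inputs.

```python
# Toy dispensing state machine driven by simulated sensor readings.
TARGET_FILL_RATIO = 0.8   # stop at 80% of the glass height (assumed)

def dispense_step(state, sensors):
    """sensors: dict with 'glass_present', 'glass_height_mm', 'water_level_mm'."""
    if state == "WAIT_GLASS":
        return ("MEASURE", False) if sensors["glass_present"] else ("WAIT_GLASS", False)
    if state == "MEASURE":
        return ("FILLING", True)                      # open the valve
    if state == "FILLING":
        if sensors["water_level_mm"] >= TARGET_FILL_RATIO * sensors["glass_height_mm"]:
            return ("DONE", False)                    # close the valve
        return ("FILLING", True)
    return ("DONE", False)

state, level = "WAIT_GLASS", 0.0
for _ in range(20):
    readings = {"glass_present": True, "glass_height_mm": 120.0, "water_level_mm": level}
    state, valve_open = dispense_step(state, readings)
    if valve_open:
        level += 10.0                                 # simulated fill per control tick
print("final state:", state, "| water level:", level, "mm")
```
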
Project 18 Thumbnail
Xera

AIMED

Ayşe Selin Cin, Ece Göre, İrem İlter, Selin Kasap, Buğra Kerem Özcan, Burak Eren Özcan

This project focuses on developing an intelligent mammography imaging system for accurate breast lesion detection. On the hardware side, a low-noise, power-efficient Charge Sensitive Amplifier (CSA) is designed using TSMC 180nm CMOS technology. Detector characterization is conducted using metrics such as Modulation Transfer Function (MTF) and Noise Power Spectrum (NPS) for both flat panel and CMOS-based detectors, allowing performance comparisons. On the software side, two deep learning models are developed. The first model is a Faster R-CNN object detection framework with a ResNet-50 backbone and Feature Pyramid Network (FPN), trained to locate suspicious regions in mammogram images, achieving 90% accuracy. The second model is a ResNet-50 classifier, trained to label the detected lesions as benign or malignant with an accuracy of 76%. Preprocessing steps such as resizing, normalization, and contrast enhancement are applied before training. To ensure accessibility, a user-friendly web interface is also developed, enabling users to upload mammography images of any size and select between detection or classification tasks. The system then returns the results in a clear and interpretable way. This end-to-end design, integrating advanced hardware with AI-assisted analysis, provides a promising solution for improving early breast cancer diagnosis and supporting both clinical workflows and personal health monitoring.

Learn More →
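
The preprocessing steps named in the AIMED abstract (resizing, normalization, contrast enhancement) might look roughly like the sketch below. The 512x512 target size and CLAHE parameters are assumptions, not the project's exact settings, and the detection and classification networks themselves are not shown.

```python
# Mammogram preprocessing sketch: normalize, resize, and apply CLAHE contrast enhancement.
import numpy as np
from skimage import exposure, transform

def preprocess(image, size=(512, 512)):
    """image: 2-D grayscale array of any size and dtype."""
    img = image.astype(np.float64)
    img = (img - img.min()) / (img.max() - img.min() + 1e-8)   # normalize to [0, 1]
    img = transform.resize(img, size, anti_aliasing=True)
    img = exposure.equalize_adapthist(img, clip_limit=0.02)    # CLAHE contrast boost
    return img

raw = np.random.default_rng(7).integers(0, 4096, size=(2294, 1914))  # fake 12-bit scan
out = preprocess(raw)
print(out.shape, float(out.min()), float(out.max()))
```
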

Event Schedule

  • [13:30]

    Fair Opening & Welcome Remarks

    [Location: e.g., Main Foyer]

  • [13:30 - 14:30]

    Project Presentations & Demonstrations (Session 1)

    [Location: Project Booths]

  • [15:30 - 16:00]

    Lunch Break & Networking

    [Location: Designated Area]

  • [16:00 - 16:30]

    Project Presentations & Demonstrations (Session 2) / Judging

    [Location: Project Booths]

  • [16:30]

    Award Ceremony & Closing Remarks

    [Location: e.g., EEE Auditorium]

Our Sponsors

We gratefully acknowledge the generous support of our sponsors who make this event possible and contribute to the success of our students.

Past Projects

Explore the archives of previous EEE Fairs and discover the innovative projects showcased in past years.

Get In Touch & Find Us

Contact Information

For inquiries about the EEE Fair, please contact:

  • [contact-email@bilkent.edu.tr]
  • Bilkent University, Electrical and Electronics Engineering Dept.
    Üniversiteler Mahallesi, 06800 Çankaya/Ankara, Turkey
    Event Location: [Specific Building/Foyer]

Event Location Map