Research and Development Projects

Augmented Reality Development Tool



TÜBİTAK-1003

Project No: 116E786
Project Title: Augmented Reality Development Tool
Project Manager: Prof. Dr. Haşmet GÜRÇAY
Principal Investigators: Prof. Dr. Haşmet GÜRÇAY, Asst. Prof. Dr. Selen PEHLİVAN (TED University), BİTES Savunma, Havacılık ve Uzay Teknolojileri Ltd. Şti.
Co-Investigators: Asst. Prof. Dr. Ufuk ÇELİKCAN, Asst. Prof. Dr. Serdar ARITAN, Prof. Dr. Tolga ÇAPIN (TED University), Şevket Süreyya Caba (BİTES)

[Project Details]

The purpose of this project is to develop a flow-based visual programming tool for Augmented Reality (AR) content and application development. The tool aims to improve, speed up, and simplify the AR application development process.

Existing AR application development tools fall short in terms of the ease with which one can develop an AR application, and there is a need for code-generating software. A number of applications (e.g., Studierstube and ARToolKit) have been developed to accelerate the process of AR application creation. However, these tools require the developer to have deep knowledge of the algorithms and an understanding of the underlying principles. Moreover, many of these applications are table-top at best and fall short of offering good performance on mobile embedded devices. For this reason, there is a need for efficient and simple AR development environments that specifically target embedded systems.

Visual Programming (VP) is not new; it was first proposed in 1963. However, it gained recognition only recently, with advances in the field of computer graphics. With VP, what usually needs to be hand-programmed by the developer is organized into visual blocks. Developers arrange these visual blocks in an appropriate sequence and make logical connections between them to build a functioning application. VP has found use in education (Scratch), multimedia (SynthEdit), games, animation and virtual reality (Unity, Blender), modeling and simulation (Simulink), industrial applications (IBM InfoSphere) and other fields.
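
The flow-based idea behind such blocks can be sketched in a few lines: each "block" is a node with a function and upstream connections, and evaluating the final block pulls data through the wired graph. This is a minimal illustration only; all block names here are hypothetical, not the project's actual block set.

```python
# Minimal sketch of flow-based visual programming: blocks are nodes,
# connections replace hand-written glue code.

class Block:
    def __init__(self, name, func):
        self.name = name
        self.func = func
        self.inputs = []          # upstream blocks feeding this one

    def connect(self, upstream):
        self.inputs.append(upstream)
        return self               # allow chaining connections

    def evaluate(self):
        # Pull values from upstream blocks, then apply this block's function.
        args = [b.evaluate() for b in self.inputs]
        return self.func(*args)

# Wire up a tiny graph: two sources flowing into an "overlay" block.
camera = Block("camera_frame", lambda: "frame")
model = Block("cad_model", lambda: "model")
overlay = Block("ar_overlay", lambda f, m: f"render {m} on {f}")
overlay.connect(camera).connect(model)

print(overlay.evaluate())   # -> "render model on frame"
```

A visual editor would draw these nodes and wires on a canvas; the execution model underneath stays the same.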

With the AR Development Tool to be developed in this project:
1) AR applications will become usable in unprepared and varying environments through markerless tracking;
2) AR applications will be developed quickly, without any programming knowledge required of the developer;
3) AR applications will become far more interactive with the user than existing AR applications.

The originality of this project can be outlined as follows:
1) Development of a Visual Programming environment for AR content and application development
2) Substantially improved user–application interaction through user tracking and feedback

The AR Development Environment will be developed as a stand-alone, platform-independent application. AR applications will run on embedded AR-enabled glasses. The AR Development Environment will be based on open-source libraries such as OpenCV and OpenSceneGraph. Developers will be able to import AR resources such as 3D CAD models and videos. The software will be developed in C/C++, with the Visual Programming Interface (VPI) built on the Qt framework. Using the VPI, developers will be able to build AR applications via drag-and-drop.

AR applications developed within the AR Development Environment will be saved in an XML-style, machine- and human-readable file format to facilitate content sharing and reuse.
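
As a hedged illustration of what such a machine- and human-readable save format could look like, the snippet below serializes a tiny block graph with Python's standard `xml.etree.ElementTree`. The element and attribute names (`ar_application`, `block`, `link`, etc.) are hypothetical, not the project's actual schema.

```python
# Sketch of an XML-style save format for a block graph (hypothetical schema).
import xml.etree.ElementTree as ET

app = ET.Element("ar_application", name="demo")
blocks = ET.SubElement(app, "blocks")
ET.SubElement(blocks, "block", id="b1", type="MarkerlessTracker")
ET.SubElement(blocks, "block", id="b2", type="ModelRenderer", model="chair.obj")
links = ET.SubElement(app, "links")
# The tracker's pose output feeds the renderer's input.
ET.SubElement(links, "link", source="b1", target="b2")

xml_text = ET.tostring(app, encoding="unicode")
print(xml_text)
```

Because the format is plain XML, saved applications remain diffable, versionable, and editable by hand, which is what makes sharing and reuse straightforward.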



Summarization Approaches Towards Interpreting Big Visual Data



TÜBİTAK-1003

Project No: 116E685
Project Title: Summarization Approaches Towards Interpreting Big Visual Data
Project Manager: Asst. Prof. Dr. İbrahim Aykut ERDEM
Principal Investigators: Asst. Prof. Dr. İbrahim Aykut ERDEM, Gürkan VURAL (Somera A.Ş.)
Co-Investigators: Prof. Dr. Pınar DUYGULU ŞAHİN, Asst. Prof. Dr. Mehmet Erkut ERDEM, Asst. Prof. Dr. Nazlı İKİZLER CİNBİŞ, Umut EROĞUL (Somera A.Ş.)

[Project Details]

Due to advancements in the Internet and digital imaging technologies and their increasing role in everyday life, there has been an upsurge in the amount and diversity of visual data uploaded to digital environments such as the Internet. However, compared to the approaches proposed for other types of large-scale data, approaches in the literature for interpreting big visual data are almost non-existent. For this reason, big visual data has been regarded as the "dark matter" of the Internet. An important distinguishing property of such big visual data is its multi-modal nature, with extra accompanying information such as text, location, etc. While this property could be seen as an advantage, the additional data are likely to be very noisy and subjective, making direct matching infeasible. For this reason, the additional metadata should be interpreted very carefully. An important challenge is to develop methods that correctly couple visual data and metadata, and utilize their expressive power in an effective way.

The aim of the proposed project is to interpret big and noisy visual data recorded in diverse environments with no predefined constraints. To this end, the goal is to develop and apply original data mining methods to extract important knowledge and increase the accessibility of such archives. In particular, we aim to focus on summarization approaches, so that big visual data is more effectively structured and enriched with additional semantic information. The summarization approaches, which make use of the multi-modal nature of the data, will focus on three main problems: 1) learning semantic concepts and spatio-temporal attributes from big visual data; 2) organizing large photograph collections; 3) summarizing videos in large web archives. In all these problems, big visual data and the additional information referred to as metadata will be handled together.

Another important goal of the project is to develop the visual data summarization methods produced during our research activities into marketable products through university-industry collaboration. In a joint effort with the social media analysis company Somera, a study will be carried out on automatic knowledge extraction from visual content in social media; as partners, Somera will also bring their experience and expertise in collecting, storing and organizing big data in a scalable way. Integrating the above approaches into Somera's platform opens up the possibility of exploiting visual knowledge automatically extracted from social media to analyze perceptions around brands and products, which will be a first for the domestic market in Turkey. Considering that all of these approaches are language-independent, this work may enable even more opportunities to play a leading role not just in the domestic market but in foreign markets as well.



Enhancing the User Experience of 3D Displayed Virtual Scenes



TÜBİTAK-1001

Project No: 116E280
Project Title: Enhancing the User Experience of 3D Displayed Virtual Scenes
Project Coordinator: Asst. Prof. Dr. Ufuk ÇELİKCAN

[Project Details]

3D computer graphics has reached a high level of visual quality, and improving 3D graphical image quality continues to be an area of research receiving much attention. Recently, developments in 3D-capable displays and 3D televisions, 3D digital cinema, 3D games and other 3D applications have significantly increased the emphasis on the creation and processing of stereoscopic 3D content. In parallel with these developments, new techniques to improve the perceived quality of 3D scenes are in high demand.

The core issue in 3D content creation is determining the comfortable range of depth perceivable by the user and maximizing the perceived depth within those limits. Recent research has made progress in controlling the perceived depth range in the post-production pipeline. However, unlike offline production, where such adjustments can be made in advance, an interactive environment, where the position and rotation of the camera change dynamically based on user input, calls for scalable stereo camera control systems that run in real time in order to keep the user's perceived depth in the comfortable target range.
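
One property that makes such real-time control tractable is that screen parallax scales linearly with the stereo camera baseline, so the baseline can be rescaled each frame to keep the nearest and farthest visible points inside a comfort limit. The sketch below uses a simplified parallel-camera parallax model with illustrative numbers; it is not the project's actual algorithm.

```python
# Hedged sketch of per-frame stereo baseline control under a comfort limit.

def parallax(z, baseline, convergence):
    # Parallel stereo converged at the screen plane: points at the
    # convergence distance have zero parallax.
    return baseline * (1.0 - convergence / z)

def comfortable_baseline(z_near, z_far, convergence, max_parallax, baseline=1.0):
    # Worst-case parallax occurs at the nearest or farthest scene point.
    worst = max(abs(parallax(z_near, baseline, convergence)),
                abs(parallax(z_far, baseline, convergence)))
    if worst == 0.0:
        return baseline
    # Parallax is proportional to baseline, so one scale factor suffices.
    return baseline * min(1.0, max_parallax / worst)

# Example frame: scene spans 2..50 units, screen plane at 5 units.
b = comfortable_baseline(z_near=2.0, z_far=50.0, convergence=5.0,
                         max_parallax=0.5, baseline=1.0)
```

In a real system the scene depth range, and hence the baseline, would be re-estimated every frame as the camera moves.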

The foremost example of an interactive setting is a 3D game environment, where the stereoscopic output changes very dynamically. Significant difficulties persist in presenting users of these 3D interactive environments with a comfortable and realistic 3D experience. Today, eye strain and headaches are still among the common complaints after extended sessions with 3D games. Accordingly, the expected jump in demand for 3D games has not yet been realized. Here, the most prominent challenge related to the human visual system is applying the principles and limitations of 3D perception adequately in the display of stereoscopic 3D content.

This project is proposed to advance perceived 3D image quality, enhance the perception of depth, increase visual comfort, and, consequently, improve the overall user experience in the display of virtual 3D scenes. In both interactive and non-interactive environments, it aims to maximize the perceived sense of depth without causing visual discomfort, to establish the sources of visual discomfort and minimize their effects when presenting 3D content on displays of varying scales, and to detect and prevent the triggering mechanisms that lead to virtual reality sickness.



Measurement, assessment and improvement of software testing maturity for people, teams, services and organizations



TÜBİTAK-1001

Project No: 116E063
Project Title: Measurement, assessment and improvement of software testing maturity for people, teams, services and organizations
Project Coordinator: Assoc. Prof. Dr. Vahid GAROUSI

[Project Details]

To develop high-quality software systems, software testing is a critical phase of any software development process. However, software testing is expensive and makes up about half of the development cost of an average software project [1]. According to a 2013 study by the University of Cambridge [1], the global cost of finding and removing bugs from software had risen to $312 billion annually as of 2013.

On the other hand, inadequate software testing is also problematic and leads to major negative consequences and economic impacts. According to a 2002 study by the U.S. National Institute of Standards and Technology (NIST), as of 2002, software bugs (defects) cost the United States economy an estimated $59.5 billion annually, and it was estimated that improved testing practices could reduce this cost by $22.5 billion [6].

Software companies utilize different types of test practices and processes to build high-quality software, e.g., unit and system testing, black-box and white-box testing. Unfortunately, however, testing practices and processes are often neither efficient nor effective, and test engineers face many challenges in this area. To improve the effectiveness and efficiency of testing, academic researchers and test engineers have presented various test process maturity and improvement models over the last several decades. According to an initial investigation conducted by the project's principal investigator and his colleagues, about 186 sources are available in this area. For a busy test engineer, reading, understanding, utilizing and applying such a great body of knowledge (e.g., test process maturity and improvement models) in an optimal manner is a difficult task. Furthermore, there is a need for new, more practical and lightweight models in this area.

The goal of this project is to assess and evaluate the strengths and weaknesses of existing test maturity models, to develop two new models based on the needs that we have recently identified in the Turkish and Canadian software industry [2-5], and, as a result, to help test engineers and test managers in Turkey and beyond build high-quality software in a cost-effective manner. Based on the above goal, we are targeting the following five sub-goals:
  • Sub-goal 1-Systematic mapping, literature review and synthesis in the field of test maturity and test process improvement
  • Sub-goal 2-Synthesis and integration of existing test maturity and test process improvement models and helping test engineers in applying them
  • Sub-goal 3-Developing two new test maturity models: People Test Maturity Model Integrated (People-TMMI) and Services Test Maturity Model Integrated (Services-TMMI)
  • Sub-goal 4-Conducting 'exploratory' case studies in collaboration with companies and the industry to identify industrial needs and challenges in this area
  • Sub-goal 5-Conducting 'improving' case studies in collaboration with companies and the industry to improve test maturity and test processes
Additionally, this project has two unique points of strength: (1) the planned industry-academia collaborations within the scope of the project, and (2) an international research collaboration. As of this writing, two large software companies in Ankara have shown interest in engaging in case studies in this project, especially for sub-goals 4 and 5 above. Such industry-academia collaboration will support improvement and innovation in industry and will also ensure industrial relevance in academic research. The planned international research collaboration will be with Dr. Michael Felderer, a faculty member at the University of Innsbruck in Austria.

In summary, the improvements to test maturity and test processes through application of the models and approaches to be developed in this project will have a highly positive impact on software quality, software engineering productivity, and the reduction of testing efforts in Turkish software firms. To the best of our knowledge, this project will be the first of its kind in Turkey to date.



Recognition of Collective Activities via Deep Learning Techniques



TÜBİTAK-1001

Project No: 116E102
Project Title: Recognition of Collective Activities via Deep Learning Techniques
Project Coordinator: Asst. Prof. Dr. Nazlı İKİZLER CİNBİŞ

[Project Details]

In the last decade, computer vision research has witnessed a dramatic increase in research on human actions and activities. This increase is mostly due to the proliferation of cameras in everyday life and the resulting upsurge in collected image and video data. Automatically evaluating this ever-growing data has become a common and important need. In this project, for the automatic analysis of such data, we will develop and implement novel approaches for collective activity recognition; in this respect, we aim to develop machine learning approaches with high recognition accuracy.

Although sub-domains of human action recognition are becoming progressively more active, most of the research in this area is still directed towards singular human action recognition and detection, where the aim is to recognize and/or localize the actions of individual persons in isolation. However, there are many situations where the actions of individuals are not isolated; rather, they are interconnected, forming interactions or collective activities. In this context, while human interactions can be categorized as pairwise human-human, human-object and human-scene interactions, collective activities are identified as group activities that involve more than two people and have a complex structure. Gathering, walking together, and queuing can be given as examples of collective human activities.

In this project, we aim to develop and implement novel machine learning approaches targeting the problem of recognizing collective activities, which involve more than two people, typically a group of individuals. To this end, we plan to make use of the recently evolving deep learning architectures, which have been shown to be quite effective in recognition and image understanding tasks. For the purpose of collective activity recognition, we aim to design novel deep learning architectures that better capture the intrinsic properties of collective activities. More specifically, we will work on designing a multi-stream Convolutional Neural Network, where different streams work on different aspects of the data. Additionally, we will work on adapting Recurrent Neural Network architectures to this problem; in this way, we aim to model the temporal structure of such collective activities more precisely.
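
The multi-stream idea above can be sketched in miniature: each stream scores the same clip from a different aspect of the data (e.g., appearance vs. motion), and the per-class scores are fused. In a real system each stream would be a trained CNN; here they are stand-in functions with made-up scores, and all names are hypothetical.

```python
# Toy late-fusion sketch for a multi-stream activity recognizer.

CLASSES = ["gathering", "walking_together", "queuing"]

def appearance_stream(clip):
    # Stand-in for a CNN scoring static appearance cues.
    return {"gathering": 0.5, "walking_together": 0.2, "queuing": 0.3}

def motion_stream(clip):
    # Stand-in for a CNN scoring motion (e.g., optical flow) cues.
    return {"gathering": 0.1, "walking_together": 0.7, "queuing": 0.2}

def late_fusion(clip, streams, weights):
    # Weighted average of per-stream class scores.
    fused = {c: 0.0 for c in CLASSES}
    for stream, w in zip(streams, weights):
        for c, score in stream(clip).items():
            fused[c] += w * score
    return fused

scores = late_fusion("clip.mp4", [appearance_stream, motion_stream], [0.5, 0.5])
prediction = max(scores, key=scores.get)
```

Modeling temporal structure, as the RNN component aims to do, would replace the single fused score with a per-frame sequence of such scores.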

In the context of this project, in addition to working on video datasets, we plan to apply the aforementioned deep learning techniques to collective activity recognition in still images. Since there is a notable scarcity of benchmark datasets on this topic, we plan to collect a new still-image dataset that extensively covers visual data on collective human activities. This dataset will be made publicly available in order to facilitate research in this direction.



Reliability-Oriented Design Methods for Application Specific Integrated Circuits



TÜBİTAK-1001

Project No: 116E095
Project Title: Reliability-Oriented Design Methods for Application Specific Integrated Circuits
Project Coordinator: Assoc. Prof. Dr. Süleyman TOSUN

[Project Details]

The ever-increasing performance demand of computer applications has driven the shrinking of CMOS technology sizes roughly every 18 months over the past 40 years. Shrinking technology sizes have made it possible to increase the number of transistors on chips. As a result, designers are able to embed more components on a single chip than ever before. While smaller transistor sizes reduce the cost of chips as a result of smaller chip area, the increase in circuit density makes the design process more challenging than before. Each technology generation also introduces new design problems in digital systems. For example, as technology sizes are reduced, circuits become more vulnerable to radiation effects, and thus transient faults increase. While error-correcting codes (e.g., Hamming codes) can be used to reduce the effects of transient errors in memory elements, for combinational circuits double- or triple-redundancy-based methods are used to detect errors. However, redundancy-based error detection methods increase chip area and cost.
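
The triple-redundancy approach mentioned above can be illustrated with a minimal sketch of triple modular redundancy (TMR): the same combinational function is instantiated three times, and a majority voter masks a single transient fault. The fault injection below is purely illustrative.

```python
# Minimal TMR sketch: three copies of a combinational function plus a voter.

def majority(a, b, c):
    # 2-of-3 majority vote on single bits.
    return (a & b) | (a & c) | (b & c)

def tmr(inputs, logic, inject_fault_in=None):
    outputs = []
    for copy in range(3):
        out = logic(*inputs)
        if copy == inject_fault_in:      # simulate a transient bit flip
            out ^= 1
        outputs.append(out)
    return majority(*outputs)

xor_gate = lambda x, y: x ^ y
# Even with a fault injected into one copy, the voted output is correct.
assert tmr((1, 0), xor_gate, inject_fault_in=1) == 1
```

The cost noted in the text is visible here: three copies of the logic plus the voter, roughly tripling the area of the protected circuit.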

While reduced technology sizes make circuits more susceptible to transient faults, some energy reduction techniques also negatively affect their reliability. For example, when dynamic voltage scaling (DVS) is applied as an energy reduction method, the circuit consumes less energy at lower voltage levels; however, lowering the supply voltage also reduces the reliability of the circuit. When we consider the design of an application with a large number of components, tackling all system requirements, such as area, performance, energy consumption and reliability, may require new systematic design methods. Thus, the design process of application-specific integrated circuits (ASICs) must consider all these requirements at a higher level of abstraction. The high-level synthesis (HLS) process aims to integrate all system requirements at a higher level of abstraction and relieve the designer of lower-level design burdens.
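
The DVS trade-off can be made concrete with a commonly used pair of models: dynamic energy scales roughly with the square of the supply voltage, while the transient-fault rate is often modeled as growing exponentially as the voltage is scaled down. The constants below are illustrative, not measured values from this project.

```python
# Hedged sketch of the energy/reliability trade-off under DVS.

def dynamic_energy(v, v_nom=1.0, e_nom=1.0):
    # Dynamic energy ~ C * V^2: quadratic savings from voltage scaling.
    return e_nom * (v / v_nom) ** 2

def fault_rate(v, v_nom=1.0, lam_nom=1e-6, d=3.0):
    # Exponential fault-rate model: lower voltage -> higher transient-fault
    # rate; d controls how sharply reliability degrades (illustrative).
    return lam_nom * 10 ** (d * (v_nom - v) / v_nom)

# Scaling to 80% voltage saves ~36% dynamic energy...
saving = 1.0 - dynamic_energy(0.8)
# ...but raises the transient-fault rate by roughly 4x in this model.
penalty = fault_rate(0.8) / fault_rate(1.0)
```

A reliability-aware HLS method, as proposed in this project, would weigh exactly this kind of penalty against the energy saving when assigning voltage levels to operations.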

Traditional HLS methods usually consider only area, performance, and energy optimizations, and most previous work ignores overall system reliability. In particular, the effect of DVS on reliability has been completely ignored by previous studies that aim to minimize energy consumption using DVS. In this work, we aim to develop new HLS methods for ASIC design under area and performance constraints, with low energy consumption and high reliability.



Effective and efficient software test-code engineering

TÜBİTAK-3001
Project Title: Effective and efficient software test-code engineering
Duration: 3 years
Project Coordinator: Assoc. Prof. Vahid Garousi

[Project Details]

To develop high-quality software systems, software testing is a critical phase of any software development process. According to a 2013 study by the University of Cambridge [1], the global cost of finding and removing bugs from software has risen to $312 billion annually, and testing makes up half of the development time of the average project. Testing work can be roughly divided into manual and automated testing [2]. In manual testing, a human tester executes the Software Under Test (SUT), compares its actual behavior with the expected one, and records the test results (pass or fail). In automated testing, a test engineer uses a test tool (itself software), such as JUnit or Selenium, to develop (or record) test scenarios as test code, which resembles source code. Test code can then be executed automatically on the SUT as many times as needed. In general, the test automation approach is selected to decrease test effort and increase test efficiency. Automated software testing and the development of test code are now mainstream in the software industry. For instance, in a recent book authored by three Microsoft test engineers, it was reported that "there were more than a million [automated] test cases written for Microsoft Office 2007" [3].
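
JUnit and Selenium are Java and browser-automation tools; as a hedged analogy, the same idea is shown below with Python's built-in `unittest`: test code is itself code that checks the SUT's actual behavior against the expected one and can be re-run automatically. The `discount` function is a made-up stand-in SUT.

```python
# Test code in miniature: a stand-in SUT plus automated checks (Python
# unittest used as an analogy for JUnit-style test code).
import unittest

def discount(price, percent):
    # Hypothetical "software under test".
    return round(price * (1 - percent / 100.0), 2)

class DiscountTest(unittest.TestCase):
    def test_ten_percent(self):
        self.assertEqual(discount(200.0, 10), 180.0)

    def test_zero_percent(self):
        self.assertEqual(discount(99.99, 0), 99.99)

# Run the suite programmatically; a build server would do this on every
# commit, which is what makes the million-test-case scale manageable.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(DiscountTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Unlike a manual test session, this suite records pass/fail results mechanically and costs nothing to repeat.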

With the emergence of large and complex automated test suites for major commercial and open-source software, there is a major need for holistic, end-to-end management of test code across its entire lifecycle: from test-code development, to its quality assessment and improvement, to the co-maintenance of test code with production code. In a recent review paper [4], we referred to all the activities that should be conducted during the entire lifecycle of test code as Software Test-Code Engineering (STCE) and provided a summary of tools and techniques in this area.

The project coordinator has been active in various aspects of STCE in Canada and, in collaboration with several industrial partners, has designed and developed several techniques and tools and has presented empirical studies and experience reports to the community, e.g., [4-12].

After moving to Turkey in 2013, through frequent interaction with Turkish software firms, the project coordinator has observed various challenges that Turkish firms are experiencing in STCE activities and has identified various open areas to work on. The project's goal is to explore (identify) the STCE challenges in Turkey more systematically, to develop techniques and tools to improve the situation, and to transfer and apply them in the Turkish industry. This aspect of the project will be made possible by including in the R&D team an established test engineer/architect from a well-known firm in Ankara in the role of "consultant". In short, the novelty of the project is to apply the latest cutting-edge technologies in the area of STCE in Turkey and to develop new approaches for the local and national software industry. The STCE methods that we plan to develop in this project will help Turkish firms build high-quality software in a more efficient (economical) manner; as a result, the national economy will benefit from this project.



Unsupervised Joint Learning of Morphology and Syntax in Turkish

TÜBİTAK-3501

Project No: 115E464
Project Title: Unsupervised Joint Learning of Morphology and Syntax in Turkish
Project Coordinator: Assoc. Prof. Dr. Burcu Can Buğlalılar

[Project Details]

Project summary: We aim to perform unsupervised joint learning of morphology, PoS tags, and dependency parsing in a single framework for the Turkish language. The input of the system will be raw text, and the output will be the morphological analyses, PoS tags, and dependency relations of a given input text.



City Security Management System



TÜBİTAK-KAMAG 1007 Program
Project Title: KGYS Smart Support Software Project
Duration: 3 years (1 July 2015 - 30 June 2018)
Project Coordinator: Prof. Dr. Hayri Sever

[Project Details]

The aim of the project is to build a support video-analysis system that allows events captured by City Security Management Systems (Kent Güvenlik Yönetim Sistemleri, KGYS) to be examined more efficiently and quickly by operators, and to deploy the resulting system in a selected pilot region.


Project Researchers and Tasks (Computer Vision Lab):
- Asst. Prof. Dr. Aykut Erdem (Person Appearance and Navigation Detection)
- Asst. Prof. Dr. Nazlı İkizler Cinbiş, Computer Vision Lab Director (Detected Person Search)
- Asst. Prof. Dr. Ahmet Burak Can (Human Motion Analysis)
- Asst. Prof. Dr. Ufuk Çelikcan (Field Intrusion)
- Asst. Prof. Dr. M. Erkut Erdem (Person Tracking)
- Prof. Dr. Hayri Sever (Semantic Web Analysis)
- To be announced later (2 engineers and 3 Ph.D. students)



Improving the Energy Efficiency of the Random Access Procedure of the LTE-Advanced Standard for Machine-to-Machine Communications



TÜBİTAK-3001

Project Title: Improving the Energy Efficiency of the Random Access Procedure of the LTE-Advanced Standard for Machine-to-Machine Communications
Project Coordinator: Asst. Prof. Dr. Mehmet Köseoğlu

[Project Details]





Visual servoing of mobile systems, mapping and implementation on FPGA



TÜBİTAK-ARRS (Slovenia)

Project Title: Visual servoing of mobile systems, mapping and implementation on FPGA
Project Coordinator :Prof. Dr. Mehmet Önder Efe

[Project Details]





Would you like to update this application? Analysis and detection of self-updating malicious mobile apps



TÜBİTAK-1003
Project Title: Would you like to update this application? Analysis and detection of self-updating malicious mobile apps
Project Coordinator: Assoc. Prof. Dr. Sevil Şen

[Project Details]





Towards A Unified Framework For Finding What Is Interesting In Videos

TÜBİTAK Career Development Program: Award 113E497
Project Title: Towards A Unified Framework For Finding What Is Interesting In Videos
Project Duration: 3 years (04/01/2014-04/01/2017)
Project Coordinator: Dr. Aykut Erdem
Project Web Page: http://vision.cs.hacettepe.edu.tr/113E497.html
[Project Details]

Over the past decade, developments in digital imaging, along with advances in Internet technologies, have brought a huge increase in the volume of digital videos and made them readily available to anyone. With the help of video sharing websites like YouTube, Vimeo and Flickr, people from different countries can upload and share their videos with millions of others around the world. In addition to personal videos, closed-circuit television (CCTV) cameras, webcams and traffic cameras all over the world operate on a daily basis and capture millions of hours of digitized video for surveillance and safety purposes. Given this rate of increase in the amount of visual data, automatic analysis and extraction of semantic information from videos are clearly becoming more essential than ever.

The main goal of this project is to develop and apply effective computer vision techniques that can automatically detect "what is interesting" in such videos. Here, what is meant by interesting may refer to different notions, such as where people look in videos, salient objects, and interesting motion patterns or moments in videos. All of these topics form the subject matter of the proposed project. It is important to note that each notion of interestingness involves different computational problems that need to be solved. Although these problems are closely related to each other, most of the existing literature treats them separately. This project, unlike previous work, will investigate all of these interrelated concepts within a unified framework, allowing us to detect different levels of interestingness more accurately. We bridge this gap by developing techniques and methodologies for solving each task that take advantage of simultaneously using additional information from the other sources of interestingness. For this purpose, we will explore methods based on adaptive or online strategies that require little or no supervision and can work with real-time video streams.



Understanding Images and Visualizing Text: Semantic Inference and Retrieval by Integrating Computer Vision and Natural Language Processing




This material is based upon research supported by the Scientific and Technological Research Council of Turkey (TÜBİTAK) Support Program for Scientific and Technological Research Projects: Award 113E116, and the European Commission ICT COST IC1307 Action.

Project Title: Understanding Images and Visualizing Text: Semantic Inference and Retrieval by Integrating Computer Vision and Natural Language Processing
Project Number: 113E156
Project Duration: 3 years (01.09.2013-01.09.2016)
Project Coordinator: Dr. Erkut Erdem
Researchers: Asst. Prof. Dr. Aykut Erdem, Asst. Prof. Dr. Nazlı İkizler Cinbiş, Dr. Ruken Çakıcı (METU)
Project Web Page: http://vision.cs.hacettepe.edu.tr/113E116.html
COST Action Page: http://www.cost.eu/domains_actions/ict/Actions/IC1307

[Project Details]

For humans, vision and language play essential roles in perceiving the world and interacting with other individuals. When describing the external world or an image through natural language, humans can give very detailed and vivid descriptions. This relies on the harmony between the parts of the human brain specialized for visual perception and language processing, and the feedback loops between them. Both fields emerged from Artificial Intelligence: Computer Vision seeks to develop models and algorithms for analyzing and interpreting visual data, with the ultimate goal of enabling machines to see the world, while Natural Language Processing is concerned with understanding the human language capacity from a computational perspective and with the practice of building computer systems for processing natural language. Despite their common historical roots, these two disciplines are generally considered independent and even disjoint from each other. Despite the achievements in these fields, the vast majority of current approaches cannot fully benefit from multimodal (visual and textual) information. However, visual and textual data appear together in many different forms, such as webpages that include both images and text, tagged images, photos with captions, subtitled videos, etc., and the amount of such data is growing rapidly.

This project will explore the connection between vision and language from different directions, integrating computer vision and natural language processing methods. By using these two fields together, automatic systems will be obtained that transcribe the visual content of images with vivid descriptions closely resembling those given by humans. Similarly, in the context of this project, retrieval systems will be constructed that answer sentence- or paragraph-based textual queries visually, via related images or image sets.



Developing a Self-Assessment Approach for Business Process Maturity

TÜBİTAK BİDEB 2219 - International Postdoctoral Research Fellowship Programme
Project Title: Developing a Self-Assessment Approach for Business Process Maturity
Project Duration: 12 months (01.09.2013-31.08.2014)
Project Coordinator: Asst. Prof. Dr. Ayça TARHAN
Project Advisor: Asst. Prof. Dr. Oktay TÜRETKEN (Eindhoven University of Technology, the Netherlands)

[Project Details]

Business processes are critical for an organization to reach its business goals and to compete with other organizations, and today many organizations are realizing how important their business processes are for delivering quality products and services. However, managing the business processes that sustain an organization's existence is not easy. The main reason is that many different goals and methods, such as Business Process Engineering, Process Innovation, Business Process Modeling, and Business Process Automation/Workflow Management, have found a place under the umbrella of Business Process Management.

This integrated nature and growing importance of business process management have put the question of organizations' performance competence on the agenda. In other management disciplines, the concept of "maturity" emerged as a means of assessing "the state of being complete and ready, and excellence for growth or improvement." In those disciplines, maturity assessment is performed using an assessment method aligned with a maturity model. The fact that business process maturity models contain only descriptive features, and do not propose compatible measurement and assessment models that organizations can easily apply by themselves, hinders the adoption and spread of these models.

This project aims to develop a self-assessment approach for business process maturity. To serve this purpose, the following will be carried out within the scope of the project:

  • Determining the scope of the self-assessment approach by considering the problems experienced in the field of business process management and the existing business process maturity models,
  • Defining the business process assessment approach by considering the features of existing assessment models and the domain-specific challenges, and establishing the engineering mechanisms, methods, and guidance that will support this approach,
  • Developing a tool that will enable organizations to easily apply the defined assessment approach by themselves.



Developing a Holistic Approach for Establishing Sustainable and Cost-Effective Software Measurement Programs in Software Organizations

TÜBİTAK BİDEB 2232 - Postdoctoral Return Fellowship Programme Project
Project Title: Developing a Holistic Approach for Establishing Sustainable and Cost-Effective Software Measurement Programs in Software Organizations
Project Duration: 24 months (01.05.2013-30.04.2015)
Project Manager: Yrd. Doç. Dr. Çiğdem GENCEL
Project Advisor: Yrd. Doç. Dr. Ayça TARHAN

[Project Details]

Software organizations have long been aware of how important the measurement process is for better managing their software processes, products, and projects and for making informed decisions. The software measurement process is seen as a driving force for software process improvement in its own right. It is also believed to make communication between a software organization and its customers more effective.

Unfortunately, however, many software organizations still face great difficulties in establishing sustainable and effective measurement programs. It is reported that nearly 80% of measurement programs lose the support of organizational stakeholders and employees, and ultimately fail, because they are not sufficiently effective and useful for decision making and performance improvement.

This project aims to offer organizations solutions in this area by developing a practically applicable approach and tool set that supports measuring the software production process from a holistic perspective, combining top-down management practices with bottom-up reporting practices.



Co-evolution of Malware and Anti-Malware Systems

TÜBİTAK 3501 Career Development Programme
Project Title: Co-evolution of Malware and Anti-Malware Systems
Project No: 112E354
Project Duration: 26 months (01.02.2013-01.04.2015)
Project Manager: Dr. Sevil ŞEN
Scholar: Arş. Gör. Kazım Sarıkaya

[Project Details]





The Use of Multiple Cues and Contextual Knowledge in Computer Vision

TÜBİTAK 3501 Career Development Programme
Project Title: The Use of Multiple Cues and Contextual Knowledge in Computer Vision
Project No: 112E146
Project Duration: 3 years (01.09.2012-01.09.2015)
Project Manager: Dr. Erkut Erdem
Researcher: Dr. Aykut Erdem

[Project Details]

Visual cues generally provide ambiguous information about the world around us. Nevertheless, we humans are remarkably successful at reaching accurate and precise interpretations of the perceived world from these ambiguous cues. The main reason is that, while perceiving the world, our visual system adaptively combines multiple cues of different kinds and, in doing so, exploits various sources of contextual knowledge. When integrating information from low-level cues with high-level contextual knowledge, our visual system assigns more importance to reliable cues and gives less weight to those that are less reliable or less available. This allows it to adapt quickly to changes in the environment. Improving the performance of artificial vision systems therefore depends on developing computational methods that adaptively combine contextual knowledge and multiple cues in this manner.

Within the scope of this project, the effects of multiple cues and contextual knowledge on computer vision will be investigated through the closely related tasks of visual saliency estimation, image filtering, and segmentation, and the newly developed methods will be used in various computer vision applications. An important point is that these methods will be developed to support one another wherever possible.



Learning-based Analysis and Recognition of Human Interactions

TÜBİTAK 3501 Career Development Programme
Project Title: Learning-based Analysis and Recognition of Human Interactions
Project No: 112E149
Project Duration: 3 years (01.10.2012-01.10.2015)
Project Manager: Yrd. Doç. Dr. Nazlı İkizler Cinbiş

[Project Details]

The aim of this project is to develop and apply computer vision algorithms that automatically analyze and recognize human interaction patterns in large visual datasets. Human interactions are collective actions involving one or more people and/or one or more objects. They fall into different categories, such as interactions with people, interactions with objects, and interactions with the environment; these categories constitute the low-level topics of the proposed project. Each interaction category and target visual archive poses its own recognition challenges. First of all, the videos to be processed may have different characteristics: a) surveillance videos recorded with static cameras in which people appear at a distance, b) everyday-life videos collected from sources such as YouTube, where recording conditions are unconstrained and resolutions are relatively low, and c) egocentric videos shot from a first-person viewpoint, in which the hands and surroundings are visible and the camera is constantly moving. The analysis of visual human interactions must be studied separately for each of these recording conditions.

Taking these challenges into account, this project aims to develop a system that automatically analyzes and recognizes human interactions using various computer vision and machine learning techniques. In this context, both high-level and mid-level visual features will be designed to address the difficulties of the problem, and suitable solutions will be developed.



Manses: A Turkish speech recognition and tr2en translation cloud

Ministry of Science, Industry and Technology - SanTez Programme
Project Title: English-Turkish Translation Program With Speech Recognition in Mobile Systems
Project No: 00815.STZ.2011-1
Project Duration: 2 years (01/11/2011-01/11/2013)
Project Manager: Prof. Dr. Hayri SEVER

[Project Details]

Manses is a project partially funded by the SanTez programme and carried out jointly with Hacettepe University. The application enables Turkish-English translation on mobile systems using speech recognition technology and a cloud computing infrastructure. The rapid spread of mobile systems, the rise of cloud computing as one of today's key technologies, and the applicability of speech recognition tools across a wide range of domains encouraged us to develop this application. Our goal is to deliver a high-performance, robust, Turkish-based speech recognition and translation system. Supported by phonological and grammatical research on the Turkish language, speech recognition architectures are being adapted to Turkish and special algorithms are being developed for it. A speech analysis application is also being built on top of distributed computing and cloud computing. With the application to be released soon, people who do not speak Turkish will be able to communicate in Turkish with the help of their mobile devices. Likewise, people who speak Turkish but no other language will be able to communicate in the language of their choice via their mobile devices. For detailed information about the product, please visit the project page.
Project Page



Lit2Info - A Literature-Based Discovery Tool

Hacettepe University Scientific Research Projects Unit - Research Project
Project Title: Lit2Info - A Literature-Based Discovery Tool
Project No: 901602008
Project Duration: 3 years (04.07.2010-04.07.2013)
Project Manager: Prof. Dr. Hayri SEVER

[Project Details]

Literature-Based Discovery (LBD) tools aim to generate new hypotheses over large text-based knowledge sources by exploiting existing domain knowledge. LBD establishes relations between concepts in scientific articles in the biomedical domain and enables the generation of new hypotheses through the examination of these relations.

The project arose from a strong need for innovation at the intersection of Medicine and Computer Science. Once it reaches the desired maturity, it is planned to be made available to our university's researchers. The system is expected to attract interest particularly by offering researchers new perspectives in drug research. A system proven at the university will also be more attractive to companies conducting commercial research.
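A classic formulation of the relation-based hypothesis generation described above is Swanson's "ABC" model: if concept A co-occurs with B in some papers and B co-occurs with C in others, while A and C never appear together, then the A-C link is a candidate hidden hypothesis. The minimal sketch below, with a toy paper collection echoing Swanson's famous fish oil / Raynaud's syndrome discovery, illustrates the idea only; it is not the Lit2Info implementation:

```python
from itertools import combinations

# Each "paper" is reduced to the set of biomedical concepts it mentions.
# The collection below is invented for illustration.
papers = [
    {"fish oil", "blood viscosity"},
    {"blood viscosity", "raynaud syndrome"},
    {"magnesium", "migraine"},
]

def candidate_links(papers):
    """Find (A, B, C) triples where A-B and B-C co-occur in the literature
    but A-C never do: the A-C pair is a candidate hidden hypothesis."""
    direct = set()      # concept pairs seen together in at least one paper
    neighbours = {}     # concept -> set of concepts it co-occurs with
    for concepts in papers:
        for a, b in combinations(sorted(concepts), 2):
            direct.add(frozenset((a, b)))
            neighbours.setdefault(a, set()).add(b)
            neighbours.setdefault(b, set()).add(a)
    hidden = set()
    for b, linked in neighbours.items():
        for a, c in combinations(sorted(linked), 2):
            if frozenset((a, c)) not in direct:
                hidden.add((a, b, c))   # A and C are linked only through B
    return hidden

print(candidate_links(papers))
```

On this toy collection the only candidate is fish oil - raynaud syndrome, bridged by blood viscosity; a real tool would rank many such candidates by statistical strength before presenting them to a researcher.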

Hacettepe University Department of Computer Engineering
06800 Beytepe Ankara