Finished Theses

- PhD Theses

Sebastian Egger: QoE Modeling for Interactive Internet Applications in the Presence of Delay. FTW/Graz University of Technology, June 2014.

- Bachelor Theses

Kevin Erdmann: A User Interaction Framework for Empirical QoE Testing. March 2015.

Gabriel Kovacs: Long-term Active Measurement and Evaluation of E-mail System Related DNS Entries. April 2015.

Valon Lushaj: Design and Implementation of a Scientific Dashboard for a User Testing Framework. Jan. 2016.

Fabian Guschlbauer: Beyond Ping.

Philipp Hiermann: Performance Evaluation for Virtualized Servers.

Jasmin Kainer: Design and Implementation of a Modular Usage-Study Framework for Conducting Lab Studies Across Different Disciplines.

Emanual Plopu: Confidential Localization.

Vanessa Tudor: Applying the Human-Centered Design Process to the Design of the Website of the Faculty of Informatics Mentoring Programme.

David Zachhuber: Touch Heatmaps for User Testing of Mobile Apps.

Thomas Schmidt: The Salome Experience - Live Opera Streaming and Beyond - Localization Aspects and Framework Implementation.

Bernhard Schatzl: The Salome Experience - Live Opera Streaming and Beyond - Implementation and Validation.

Sebastian Egger

QoE Modeling for Interactive Internet Applications in the Presence of Delay

Task: ...

Abstract: Quality of experience (QoE) of interactive applications transmitted over TCP/IP networks has recently gained considerable attention and is mainly influenced by transmission delays resulting from TCP/IP's retransmission characteristics. This thesis shows that interactive Internet applications share the commonality of a recurring request-response cycle that is highly vulnerable to such transmission delays. The impact of transmission delays on QoE is analysed in the context of two prototypical services: interactive Internet telephony and browser-based applications. For interactive Internet telephony, a surface-structure analysis of delay-impaired voice calls reveals several changes in conversation behaviour caused by the delay. From this analysis, two conversational metrics are derived that capture the influence of delay on human-to-human conversations. Using these metrics as additional input parameters, an update to the E-Model is proposed that considerably enhances prediction performance. For browser-based applications, a novel subjective testing methodology is presented that establishes a realistic flow experience in the resulting web browsing sessions. Data from two lab studies and a field trial proves that this test methodology provides reliable and consistent results across different contexts. Regarding the relationship between waiting time and QoE for browser-based applications, this thesis postulates the WQL hypothesis: the relationship between "Waiting time and resulting QoE is Logarithmic". With the data acquired from the three studies, the WQL is verified for file downloads and simple web browsing. In contrast, for complex web browsing the WQL has to be rejected. A subsequent analysis reveals several challenges and practical issues that complicate the use of the WQL for this service.
Additionally, it identifies the subjectively perceived page load time as an interaction-based measure of waiting time and a promising input parameter for novel QoE models. Finally, a human perception model is derived that considers interaction-quality performance aspects in the quality formation process and that explains (re-)actions to (conversational) input signals in the form of active output signals.
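The WQL hypothesis lends itself to a compact numerical sketch. The following is a minimal illustration only, not the fitted model from the thesis; the coefficients `a` and `b` are hypothetical placeholders:

```python
import math

def qoe_wql(waiting_time_s, a=4.5, b=1.0):
    """WQL-style mapping: MOS = a - b * ln(waiting time).
    a and b are illustrative placeholders; the result is clamped
    to the 1..5 MOS scale."""
    mos = a - b * math.log(waiting_time_s)
    return max(1.0, min(5.0, mos))

# A logarithmic law means each doubling of the waiting time costs
# the same fixed amount of predicted MOS (b * ln 2):
drop_2_to_4 = qoe_wql(2.0) - qoe_wql(4.0)
drop_4_to_8 = qoe_wql(4.0) - qoe_wql(8.0)
print(round(drop_2_to_4, 3), round(drop_4_to_8, 3))  # equal drops
```

The constant per-doubling drop is the practical signature of the WQL: halving page-load time buys the same QoE gain regardless of the starting point, within the clamped MOS range.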

Finished: 2014-06

Gabriel Kovacs

Title: Long-term Active Measurement and Evaluation of E-mail System Related DNS Entries.

Task: E-mail was the first Internet application with a considerable impact on how people interact in today's world, making it an indispensable technology. At the time it was developed, no security measures were built in, leaving the door open for attackers and making e-mail users vulnerable. To counter these issues, DNS-based solutions were developed that try to authenticate the sending domains and thus reduce the risks. This work will investigate some of these solutions (SPF, DMARC, DKIM, ADSP) and measure their deployment rate based on the DNS entries of the domains in the Alexa Top 1 Million list over a sufficiently long period.

Abstract: This work presented the fundamentals of some of the technologies (SPF, DKIM, ADSP, DMARC) that have been developed to make e-mail authentication possible, thus providing means to mitigate the threats of spam and phishing. It also investigated the adoption rates and configuration of the above-mentioned technologies. For this purpose, a large-scale measurement (based on the Alexa Top 1 Million list) was conducted between April 2014 and February 2015.
As the observations of our large-scale measurement illustrate, more than a third of the domains in the Alexa Top 1 Million list have published an SPF record. New domains are still deploying SPF, although growth was rather flat during our measurement period. DMARC registered a very high adoption rate in relative numbers, but in absolute terms it remains far from SPF's high deployment level. Considering that SPF has been around for more than nine years (cf. [9]) while DMARC is new and actively developing, it appears without a doubt that DMARC will establish itself. Companies like Agari and ReturnPath have, since at least 2013, successfully offered paid services built around DMARC for large e-mail providers, banks, and other financial institutions.
DMARC avoided some of SPF's shortcomings by defining a cleaner syntax, omitting a "+all"-equivalent policy, and choosing better names for its policies ("soft fail" vs. "quarantine"). It is unclear why the DMARC record is published under a subdomain rather than the main domain, given that the SPF record is defined under the main domain and a single query would then suffice to acquire both records. At present there is no free solution for interpreting DMARC feedback. While for a single user or small company this may not be a problem, as they may not require DMARC analysis results themselves, it does pose a financial burden for big companies. Microsoft relies on commercial services such as those from Agari and ReturnPath for feedback evaluation. DMARC is still a relatively new technology that will have time to mature and to prove its worth. It is encouraging to see so much effort being put into increasing e-mail security while at the same time making it less resource-dependent.
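The record classification behind such a measurement can be sketched as follows. This is an illustrative parser for the published TXT-record syntax (RFC 7208 for SPF, RFC 7489 for DMARC), not the tooling actually used in the thesis; a real crawl would first resolve the TXT records, e.g. querying `_dmarc.<domain>` for DMARC:

```python
def parse_spf(record):
    """Classify the terminal 'all' mechanism of an SPF TXT record.
    Returns 'pass', 'softfail', 'fail', 'neutral', or None if the
    record is not SPF or has no 'all' term."""
    if not record.startswith("v=spf1"):
        return None
    qualifiers = {"+all": "pass", "~all": "softfail", "-all": "fail", "?all": "neutral"}
    for term in record.split():
        if term in qualifiers:
            return qualifiers[term]
        if term == "all":  # a bare 'all' defaults to '+all'
            return "pass"
    return None

def parse_dmarc(record):
    """Extract the requested policy (the p= tag) from a DMARC TXT record."""
    if not record.startswith("v=DMARC1"):
        return None
    for tag in record.split(";"):
        key, _, value = tag.strip().partition("=")
        if key == "p":
            return value  # 'none', 'quarantine', or 'reject'
    return None

print(parse_spf("v=spf1 include:_spf.google.com ~all"))                   # softfail
print(parse_dmarc("v=DMARC1; p=quarantine; rua=mailto:agg@example.org"))  # quarantine
```

Counting these classifications per crawl run over the domain list yields exactly the adoption and policy-strictness time series discussed above.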

Finished: 2015-04

Valon Lushaj

Design and Implementation of a Scientific Dashboard for a User Testing Framework

Abstract: In today's software development process, quality of experience and usability testing play a major role, and the demand for software packages covering these needs keeps growing. A basic framework for such a software system already exists from a previous bachelor's thesis.

The goal of this thesis is to extend the functionality on that basis, focusing on the implementation of the operator dashboard. The dashboard is needed for evaluating test results, but it should also make it possible to follow the test subject's actions and reactions live. This thesis describes how the user testing framework was developed, from requirements elicitation to the evaluation of the complete project. The evaluation was carried out by means of a usability test.

Finished: 2016-01

Fabian Guschlbauer

Beyond Ping

Task: Ping is a widely used network tool, but it is showing its age. The new version, Ping++, shall offer similar functionality and send an arbitrary number of ICMP packets to arbitrary IP addresses, measuring the time until an ICMP reply message is received (RTT). After the reply to the last ICMP request has arrived, statistics on the measured data shall be printed.
A further feature of Ping++ shall be a speed test to the target addresses to obtain information about the maximum bandwidth of the connection. A client-server application shall be developed for the bandwidth measurement.
Furthermore, measurement of one-way delay shall be made possible, again to be solved via client-server functionality.
In addition, a simple graphical interface shall be offered to open the tool to users without command-line experience and to ease its use in general. As the scientific part, usability tests shall be conducted with the developed prototype.
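The statistics step of the task above can be sketched in a few lines. Assuming the RTT samples (in milliseconds) have already been collected (raw ICMP sockets require elevated privileges and are omitted here), the summary mirrors ping's familiar trailer line:

```python
import statistics

def rtt_stats(samples_ms):
    """Summarize RTT samples the way ping's trailer line does:
    min / avg / max / mdev, where mdev is the mean absolute
    deviation from the mean, all in milliseconds."""
    avg = statistics.fmean(samples_ms)
    mdev = statistics.fmean(abs(s - avg) for s in samples_ms)
    return {"min": min(samples_ms), "avg": avg, "max": max(samples_ms), "mdev": mdev}

stats = rtt_stats([12.1, 11.8, 13.4, 12.0])
print("rtt min/avg/max/mdev = {min:.3f}/{avg:.3f}/{max:.3f}/{mdev:.3f} ms".format(**stats))
```

The same aggregation applies unchanged to one-way-delay samples once the client-server component delivers them.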

Abstract: For decades, ping has been a widely used and popular tool for discovering devices on a network and for measuring the round-trip time of network packets and the quality of a connection. As popular and simple as this well-known application is, many alternative software offerings with similar functionality now exist. Alternatives with extended functionality, however, such as determining network bandwidth, measuring one-way delay, or scanning entire subnets for active devices, are scarce.
For this reason we decided, within the scope of this thesis, to develop an evolution of the well-known ping application that, not least, also offers a graphical user interface. The first part of the thesis covers important protocols and the general workings of packet delivery in packet-switched networks, while the second part describes the developed prototype "Pong" in more detail.

Finished: 2015-08

Philipp Hiermann

Performance Evaluation for Virtualized Servers

Task: ...

Abstract: Looking at today's server infrastructure, we can see that virtualization becomes more important every day. There are several reasons for this trend; one of them is the desire for efficient usage of physical systems. In order to evaluate whether it makes sense to run servers virtualized on one physical machine, an independent metric is necessary. This metric has three major parameters: the first is the time between packets, referred to as Inter-Packet-Time; the second describes the time needed to process packets and is therefore called Processing-Time; and the third measures the variation of throughput across different timeslices. The measurement takes place on an infrastructure consisting of a server running the virtualized host, a measurement host, a host that sends packets, and another that receives them. Timestamping can be performed on the virtual host, the source, the destination, or the measurement host. This work raises the question of whether it makes a major difference where the packets are timestamped. The results show that the differences between hardware- and software-based measurements disappear with bigger timeslices, and that hardware-based timestamping is indeed more accurate.
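Two of the three metrics described above, Inter-Packet-Time and per-timeslice throughput, can be computed directly from arrival timestamps. The following is an illustrative reconstruction under that assumption, not the measurement code from the thesis:

```python
def inter_packet_times(arrivals):
    """Inter-Packet-Time: gaps between consecutive arrival timestamps (seconds)."""
    return [b - a for a, b in zip(arrivals, arrivals[1:])]

def throughput_per_slice(arrivals, sizes, slice_s):
    """Bytes received in each fixed-length timeslice; the variation
    across these buckets is the third metric described above."""
    start = arrivals[0]
    n_slices = int((arrivals[-1] - start) // slice_s) + 1
    buckets = [0] * n_slices
    for t, size in zip(arrivals, sizes):
        buckets[int((t - start) // slice_s)] += size
    return buckets

arrivals = [0.0, 0.4, 0.5, 1.7, 2.2]  # seconds, e.g. from the measurement host
sizes = [1500] * 5                     # bytes per packet
print(inter_packet_times(arrivals))
print(throughput_per_slice(arrivals, sizes, 1.0))
```

Running the same computation on timestamps taken at the source, destination, virtual host, or measurement host makes the paper's question concrete: the buckets converge as `slice_s` grows, which is exactly the reported disappearance of the hardware/software difference at bigger timeslices.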

Finished: 20yy-mm