
Seminars 2002

 
PhD seminars 2002

 

18 December 2002, 10:00

Public defence of PhD thesis by Robert Feldt, Computer Engineering, Chalmers University of Technology:

Biomimetic Software Engineering Techniques for Dependability

Room: HC2, Hörsalsvägen 14, Göteborg

Abstract:

The powerful information processing capabilities of computers have made them an indispensable part of our modern societies. As we become more reliant on computers and want them to handle more critical and difficult tasks, it becomes important that we can depend on the software that controls them. Methods that help ensure software dependability are thus of utmost importance.

While we struggle to keep our software dependable despite its increasing complexity, even the smallest biological system in nature shows features of dependability. This thesis applies ideas from, and algorithms modeled after, biological systems in research on and development of dependable software.

Based on a theory of software development focusing on the internal models of the developer and how to support their refinement, we present a design for an interactive software development workbench in which a biomimetic system searches for test sequences. A prototype of the workbench has been implemented and evaluated in a case study, which showed that the system successfully finds tests that reveal faults in both the software and its specification. Just as biological systems in nature exploit a niche in their environment, the biomimetic search system exploits the behavior of the software being developed.

In another study we applied genetic programming to evolve programs for an embedded control system. Although the procedure did not show much potential for use in real fault-tolerant software, the program variants could be used to visualize the difficulty of the problem domain, explore the effects of design decisions and trade off requirements.

Taken together, the work in this thesis supports the claim that biomimetic algorithms can be used to explore requirements, design and test spaces in early software engineering phases and thus help in building dependable software.

Keywords: biomimetic algorithms, software testing, automated testing, software development workbench, evolutionary computation, genetic programming, dependability, software engineering, design exploration



13 December 2002, 10:00

Public defence of PhD thesis by Vilgot Claesson, Computer Engineering, Chalmers University of Technology:

Efficient and Reliable Communication in Distributed Embedded Systems

Room: HA2, Hörsalsvägen 4, Göteborg

Abstract:

The use of distributed computing elements has grown in the embedded systems arena; consequently, the shared communication media linking these computers have garnered increasing attention. Two prominent and contemporary approaches to media sharing are the controlled and the contention-based media access paradigms, with the time-triggered (TT) and the event-triggered (ET) approaches, respectively, being prominent manifestations. The thrust of this thesis is the investigation and analysis of efficient realizations of these TT and ET approaches.

A desired attribute in embedded systems with safety and real-time requirements is the basic capability to coordinate and synchronize system time and events. This directly relates to the establishment of a predictable communication base, which subsequently becomes the basis for predictable communication at the system level. Focusing on bus-based communication protocols, we present a novel synchronization approach targeting efficiency and low communication overhead as the main drivers for TDMA environments. Existing techniques require explicit transfer of node ID information for synchronization. In this novel approach, the synchronization process uses the implicit information in each node's unique message length as a node identifier. Furthermore, our initial startup synchronization approach is fault-tolerant and has a bounded startup time. We also present a re-synchronization strategy that incorporates recovering nodes into synchronization.
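
To make the message-length idea concrete, here is a small Haskell sketch (a hypothetical illustration with made-up node IDs, frame lengths and schedule, not the thesis's protocol): a listener maps the length of an observed frame to the sending node and, via the static TDMA schedule, to the currently active slot.

import qualified Data.Map as Map

type NodeId   = Int
type FrameLen = Int

-- Hypothetical assignment of unique frame lengths to nodes.
lengthTable :: Map.Map FrameLen NodeId
lengthTable = Map.fromList [(8, 1), (12, 2), (16, 3), (20, 4)]

-- Static TDMA schedule: which node owns which slot in a round.
tdmaSchedule :: [NodeId]
tdmaSchedule = [1, 2, 3, 4]

-- Infer the sender and the currently active slot from an observed frame
-- length, without any explicit node ID field in the frame.
identifySlot :: FrameLen -> Maybe (NodeId, Int)
identifySlot len = do
  sender <- Map.lookup len lengthTable
  slot   <- lookup sender (zip tdmaSchedule [0 ..])
  return (sender, slot)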

The event-triggered and the time-triggered media access paradigms have spawned distinct followings, with much-debated pros and cons regarding their relative flexibility, bandwidth efficiency and predictability. The event-triggered approach is commonly perceived as providing high flexibility, while the time-triggered approach is expected to provide a higher degree of predictable communication access to the media. One part of this thesis objectively and quantitatively assesses the capabilities and limitations of each of these paradigms. More importantly, we quantify the spread of their differences and provide system design guidelines for the suggested best usage of each approach. The focus of this work is on response times of the communication system, and on the schedulability of the communication system in collaboration with tasks in the nodes. Focusing on efficiency, the second component of this thesis deals with introducing modifications in the time-triggered approach to efficiently accommodate event-triggered communication using the time-triggered operations as a base.

Keywords: distributed embedded systems, synchronization, media access protocols, time-triggered, event-triggered

 



6 December 2002, 13:00

Public defence of PhD thesis by Pär Moqvist, Transmission Theory, Chalmers University of Technology:

Multiuser Serially Concatenated Continuous Phase Modulation

Room: HC2, Hörsalsvägen 14, Göteborg

Abstract:

A multiuser communication system using serially concatenated and randomly interleaved continuous phase modulation (SCCPM) over the additive white Gaussian noise (AWGN) channel is investigated. The users, which may be asynchronous, are allowed to have individual energy levels as well as carrier frequencies and phases. This model incorporates multiple-access signaling similar to direct-sequence code division multiple-access (DS-CDMA), trellis-coded multiple-access (TCMA), and frequency division multiple-access (FDMA) with arbitrary spectral overlap, as well as non-intentional co-channel or adjacent channel interference of the same signaling type. First, the system is analyzed through analytical upper bounds on the average bit error probability for a given user under maximum-likelihood (ML) detection, where the average is over the ensemble of systems over all sets of interleavers. It is shown that in a properly designed system, the bit error probability vanishes for infinite interleaver sizes and a sufficiently large channel signal-to-noise ratio (SNR), regardless of the signal correlation between the users. Thus, even with equal modulation, energy levels, and carrier frequencies and phases, the users can be detected adequately provided they employ random interleaving. The second part of the analysis concerns iterative decoding of multiuser SCCPM. A convergence analysis based on EXIT charts is presented, along with decoding threshold estimates. It is observed that in systems with no frequency offset, the performance of ML detection does not always carry over to iterative decoding. On the other hand, for many other systems excellent performance can be obtained both in terms of power efficiency (bit error rate as a function of SNR) and spectral efficiency (bandwidth). In particular, systems are demonstrated with performance within 1 dB of the average-power limited Shannon capacity at 1 bit/s/Hz, and within 2.3 dB at 2 bits/s/Hz.
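
As background for the final figures (standard AWGN channel theory, not a result of the thesis), the average-power-limited Shannon capacity referred to above corresponds to a minimum required E_b/N_0 at spectral efficiency \eta of

    \frac{E_b}{N_0} \;\ge\; \frac{2^{\eta} - 1}{\eta},

which gives 0 dB at \eta = 1 bit/s/Hz and about 1.76 dB at \eta = 2 bit/s/Hz; the reported systems thus operate roughly 1 dB and 2.3 dB above these limits, respectively.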


8 November 2002, 10.15

Public defence of PhD thesis by Ana Bove, Computing Science, Chalmers University of Technology:

General Recursion in Type Theory

Room: Hörsalen, Matematiskt centrum, Eklandagatan 86, Göteborg

Abstract:

This thesis deals with the use of constructive type theory as a programming language. In particular, it presents a method to translate general recursive functional programs into their type-theoretic equivalents.

A key notion in functional programming is recursion, which allows the object being defined to refer to itself. Associated with recursion is the problem of termination since, in general, there is no guarantee that a recursive function will always terminate. It is hence very important to have mechanisms to prove that a certain function terminates, or even to ensure termination by the form of the definition. Examples of the latter are structurally recursive definitions, where every recursive call is performed on a structurally smaller argument.
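
As a minimal Haskell illustration of the distinction (my example, not taken from the thesis), the first definition below is structurally recursive while the second is general recursive:

-- Structurally recursive: the recursive call is on xs, a structural part of the input.
len :: [a] -> Int
len []       = 0
len (_ : xs) = 1 + len xs

-- General recursive: the call is on (n - m), which is numerically smaller when
-- m > 0 but not a structural component of n, so no syntactic criterion
-- guarantees termination.
divSub :: Int -> Int -> Int
divSub n m
  | n < m     = 0
  | otherwise = 1 + divSub (n - m) m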

Standard functional programming languages impose no restrictions on recursive programs and thus, they allow the definition of any general recursive function. There exist, though, less conventional functional programming languages where only terminating recursive definitions are allowed. Constructive type theory can be seen as one of them. In addition, logic can also be represented in constructive type theory by identifying propositions with types and proofs with programs of the corresponding type. Therefore, a type can encode a complete specification, requiring also logical properties from an algorithm. As a consequence, algorithms are correct by construction or can be proved correct by using the expressive power of constructive type theory. This is clearly an advantage of constructive type theory over standard programming languages. A computational limitation of type theory is that, to keep the logic consistent and type-checking decidable, only structurally recursive definitions are allowed.

Many important algorithms are general recursive. Although many such algorithms can be proved to terminate, there is no syntactic condition that guarantees their termination and thus, general recursive algorithms have no direct translation into type theory.

In this thesis, a method to formalise general recursive algorithms in type theory that separates the computational and logical parts of the definitions is presented. As a consequence, the resulting type-theoretic algorithms are clear, compact and easy to understand. Given a general recursive algorithm, the method consists in defining an inductive special-purpose accessibility predicate that characterises the inputs on which the algorithm terminates. The type-theoretic version of the algorithm can then be defined by structural recursion on the proof that the input values satisfy this predicate. When formalising nested algorithms, the special-purpose accessibility predicate and the type-theoretic version of the algorithm must be defined simultaneously because they depend on each other. Since the method separates the computational part from the logical part of a definition, formalising partial functions also becomes possible.
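
The following rough Haskell rendering (my sketch; the actual development uses the dependent types of constructive type theory, which Haskell lacks, so the "proof" here is just an ordinary value) shows the separation for division by repeated subtraction, the divSub example above:

-- Special-purpose accessibility predicate for divSub: a value of DivAcc
-- witnesses that divSub terminates on a particular pair of inputs.
data DivAcc = DivBase         -- n < m: no recursive call is made
            | DivStep DivAcc  -- n >= m: the call on (n - m, m) terminates

-- The algorithm itself, defined by structural recursion on the proof argument.
divAcc :: Int -> Int -> DivAcc -> Int
divAcc _ _ DivBase     = 0
divAcc n m (DivStep p) = 1 + divAcc (n - m) m p

-- The logical part is kept separate: here we construct the termination
-- witness for the inputs of interest (assuming m > 0).
divTerminates :: Int -> Int -> DivAcc
divTerminates n m
  | n < m     = DivBase
  | otherwise = DivStep (divTerminates (n - m) m)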

 



18 October 2002, 10.00

Public defence of PhD thesis by Martin Hiller, Computer Engineering, Chalmers University of Technology:

A Software Profiling Methodology for Design and Assessment of Dependable Software

Room: HC2, Hörsalsvägen 14, Göteborg

Abstract:

The advent of computerized consumer products, such as automobiles and mobile systems, has produced a large increase in the need for dependable (or robust) systems. As cost is a relevant issue for such systems, the cost of dependability has to be kept low. Furthermore, as the replication of software is virtually free compared to the replication of hardware, the trend is to implement more and more functions in software. This motivates the search for methodologies for cost-efficient design and assessment of dependable software.

An established approach for designing dependable software entails the addition of error detection mechanisms (EDMs) and error recovery mechanisms (ERMs). The effectiveness of these mechanisms, however, is achieved only if their composition is matched with placement in locations where they are actually effective. This thesis endeavors to develop a systematic methodology for profiling software in order to compose and place EDMs and ERMs.

Presented in this thesis is a set of approaches for profiling software such that the most vulnerable and/or critical modules and signals can be identified in a quantifiable way. The profiling methodology relies on the analysis of error propagation and error effects in modular software. The resulting profiles indicate where in a given software system errors tend to propagate and where they tend to cause the most damage, as experienced by the environment.
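
As a simplistic illustration of what such a profile can quantify (my sketch; the thesis's EPIC metrics are defined more carefully), one can estimate a permeability-style measure as the fraction of errors injected on a module input that are observed to propagate to a given output:

-- One fault-injection experiment: where the error was injected, which output
-- was observed, and whether the error propagated there.
data Injection = Injection
  { inputSignal  :: String
  , outputSignal :: String
  , propagated   :: Bool
  }

-- Fraction of injected errors on input 'inp' that reached output 'out'.
permeability :: String -> String -> [Injection] -> Double
permeability inp out records
  | null relevant = 0
  | otherwise     = fromIntegral hits / fromIntegral (length relevant)
  where
    relevant = [ r | r <- records, inputSignal r == inp, outputSignal r == out ]
    hits     = length (filter propagated relevant)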

The main contribution of this thesis is a software profiling methodology that encompasses development of the fault injection tool suite PROPANE (Propagation Analysis Environment) and the analysis framework EPIC (Exposure, Permeability, Impact, Criticality---the four main metrics introduced in the framework). The vision is that this contribution can aid software developers in the design and assessment of dependable software in the early stages of development.

 


14 June 2002, 14.15

Public defence of PhD thesis by Thomas Lundqvist, Computer Engineering, Chalmers University of Technology:

A WCET Analysis Method for Pipelined Microprocessors with Cache Memories

Room: HC2, Hörsalsvägen 14, Göteborg

Abstract:
When constructing real-time systems, safe and tight estimates of the worst-case execution time (WCET) of programs are needed. To obtain tight estimates, a common approach is to perform path and timing analyses. Path analysis is responsible for eliminating infeasible paths in the program, and timing analysis is responsible for accurately modeling the timing behavior of programs. The focus of this thesis is on the analysis of programs running on high-performance microprocessors employing pipelining and caching.

This thesis presents a new method, referred to as cycle-level symbolic execution, that tightly integrates path and timing analysis. An implementation of the method has been used to estimate the WCET for a suite of programs running on a high-performance processor. The results show that by using an integrated analysis, the overestimation is significantly reduced compared to other methods. The method automatically eliminates infeasible paths and derives path information such as loop bounds, and performs accurate timing analysis for a multiple-issue processor with an instruction and data cache.

The thesis also identifies timing anomalies in dynamically scheduled processors. These anomalies can lead to unbounded timing effects when estimating the WCET, which makes it unsafe to use previously presented timing analysis methods. To handle these unbounded timing effects, two methods are proposed. The first method is based on program modifications and the second method relies on using pessimistic timing models. Both methods make it possible to safely use all previously published timing analysis methods even for architectures where timing anomalies can occur.

Finally, the use of data caching is examined. For data caching to be fruitful in real-time systems, data accesses must be predictable when estimating the WCET. Based on a notion of predictable and unpredictable data structures, it is shown how to classify program data structures according to their influence on data cache analysis. For both categories, several examples of frequently used types of data structures are provided. Furthermore, it is shown how to make an efficient data cache analysis even when data structures have an unknown placement in memory. This is important, for example, when analyzing single subroutines of a program.
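
For readers unfamiliar with WCET estimation, the simplistic structure-based bound below (a Haskell sketch of the classical tree-based approach, not the thesis's cycle-level symbolic execution, which exists precisely to tighten such estimates) shows how an upper bound is composed from block costs, branch maxima and loop bounds:

-- A structured program annotated with worst-case cycle counts and loop bounds.
data Prog
  = Basic Int          -- basic block with a known worst-case cycle count
  | Seq Prog Prog      -- sequential composition
  | Branch Prog Prog   -- if-then-else: assume the more expensive side
  | Loop Int Prog      -- loop with a known iteration bound

-- Classical tree-based WCET bound: sum over sequences, max over branches,
-- bound times body over loops.
wcet :: Prog -> Int
wcet (Basic c)    = c
wcet (Seq a b)    = wcet a + wcet b
wcet (Branch a b) = max (wcet a) (wcet b)
wcet (Loop n p)   = n * wcet p

-- Example: a loop of at most 10 iterations around a branch gives a bound of 130.
example :: Int
example = wcet (Loop 10 (Seq (Basic 5) (Branch (Basic 3) (Basic 8))))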

 



14 February 2002, 13.15

Public defence of PhD thesis by Maarja Kruusmaa, Computer Engineering, Chalmers University of Technology:

Repeated Path Planning for Mobile Robots in Dynamic Environments

Room: Wigforssalen, Halmstad University

Abstract:
This thesis presents a global navigation strategy for repeated traverse of mobile robots in dynamic environments.

The research is motivated by the fact that most real-world applications of mobile robotics imply repeated traverse between predefined target points in large, uncertain real-world domains. The goal of this study is to develop a framework for navigation in large-scale dynamic environments that permits the robot to fulfill its assignment by adapting to the use of easily traversable paths.

While mobile robots usually tackle the problem of path planning in uncertain environments by implementing obstacle avoidance routines, in this approach the robot also learns to use routes along which obstacles occur less often. The few alternative approaches that consider the problem of global obstacle avoidance operate in well-structured environments using an a priori known topological map. The approach reported in this thesis uses a grid-based map for path planning. It is argued that a grid-based map together with a stochastic wavefront planner offers a greater number of alternatives to the path planning problem and that the robot adapts to be able to use the best routes even when very little a priori knowledge of the environment is available.
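
As an illustration of the wavefront idea (a minimal deterministic sketch in Haskell, not the stochastic planner of the thesis), the planner below labels every reachable free cell of a grid map with its distance to the goal by breadth-first expansion; the robot can then follow decreasing labels towards the goal.

import qualified Data.Map as Map
import qualified Data.Set as Set

type Cell = (Int, Int)

neighbours :: Cell -> [Cell]
neighbours (x, y) = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]

-- Breadth-first wavefront expansion from the goal over the free cells of a
-- grid map: every reachable cell is labelled with its distance to the goal.
wavefront :: Set.Set Cell -> Cell -> Map.Map Cell Int
wavefront free goal = go (Map.singleton goal 0) [goal]
  where
    go dist []       = dist
    go dist (c : cs) =
      let d     = dist Map.! c
          next  = [ n | n <- neighbours c
                      , n `Set.member` free
                      , not (n `Map.member` dist) ]
          dist' = foldr (\n m -> Map.insert n (d + 1) m) dist next
      in go dist' (cs ++ next)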

To adapt to dynamic environments the robot uses case-based reasoning to remember the paths followed and to reason about their traversability. While case-based reasoning is usually applied for planning problems in static environments, this approach also works in dynamic environments and uses a unique similarity measure to reduce the solution space.

This thesis also offers an alternative decision-making strategy for mobile robots in hazardous environments that is based on the concept of irreversible decisions.

The experiments made in simulated and real environments demonstrate that this approach to global navigation permits the robot to increase the predictability of its behavior and to minimize the risk of collision and time delays.

Keywords: mobile robot, navigation, case-based reasoning, path planning, dynamic environments, decision-making

 

Licentiate seminars 2002

 

19 December 2002, 13:15

Licentiate seminar by Carl Bergenhem, Computer Engineering, Chalmers University of Technology:

Protocols with Heterogeneous Real-Time Services for High-Performance Embedded Networks


Room: Theatre M104 (at the library), Kristian IV:s väg, Halmstad University


Abstract:

Network protocols for applications that demand high performance and heterogeneous real-time services are presented. These protocols control the medium access to the network and offer additional features to the user, both different user services for traffic and services for parallel and distributed real-time processing. The network architecture assumed is a unidirectional pipelined optical ring.

Radar Signal Processing (RSP) is a typical application area. Such a system contains many computation nodes that are interconnected in order to co-operate and thereby achieve higher performance. The computing performance of the whole system is greatly affected by the choice of network. Computing nodes in a parallel computer for RSP should be tightly coupled, i.e., communications cost (e.g. latency) between nodes should be small, so that the whole system can be perceived as a single unit. This is possible if a suitable network with an efficient protocol is used.

There is an industrial need for new high-performance networks with support for the, often heterogeneous, real-time requirements found in (often embedded) applications such as RSP and other areas such as multimedia. The traffic in this kind of network can be classified according to its requirements. The proposed protocols partition the traffic into three classes with distinctly different qualities: traffic with hard real-time demands, such as mission-critical commands; traffic with soft real-time demands, such as process data (a deadline miss here only leads to decreased performance); and, finally, traffic with no real-time constraints at all. The contributions of the present thesis are protocols that integrate heterogeneous real-time services for the three traffic classes.

The performance of the proposed protocols is evaluated through simulations and analysis. It is shown that the protocol is an efficient choice for RSP systems. A brief survey of related technologies is included in the thesis. These are studied from the perspectives of application, architecture and user service.

Keywords: Optical, Ring, Pipeline, Distributed, Parallel-Processing, Real-time, SAN (System Area Network), Heterogeneous, Service


18 December 2002, 14:15

Licentiate seminar by Vishaka Nanayakkara, Computer Engineering, Chalmers University of Technology:

Analysis of Congestion in Computer Networks

Room: Sal EL41, Linsen, Hörsalsvägen 11, Göteborg


Abstract:


In this thesis, we discuss the behaviour of network traffic in networks that have a bottleneck link with persistent congestion. In order to evaluate and analyse this, we collect network traffic data in two dissimilar networks: a persistently congested network and a network with high bandwidth and no persistent congestion. We compare and contrast these data in order to analyse the behaviour due to persistent congestion. We design mathematical and graphical methods to perform the comparisons with data that could not be fitted to any known distributions. We devise a model to cater for the 'geographical latency' in TCP flows.
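
As general background on why such latency matters (a standard approximation, not the model devised in the thesis), the well-known relation between steady-state TCP throughput, round-trip time RTT and packet loss rate p is

    \text{throughput} \;\approx\; \frac{MSS}{RTT}\,\sqrt{\frac{3}{2p}},

so flows with long geographical round-trip times obtain a smaller share of a congested bottleneck than nearby flows experiencing the same loss rate.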

The proliferation of the Internet has rapidly increased the number of computers connected to networks. Not only the number of networked computers but also the contents and types of network traffic change constantly. All of this calls for larger network bandwidths and better management of the available infrastructure. The increased network traffic leads to congestion when the available bandwidths are not large enough to cater for the requirements. Today, congestion in computer networks is a global problem. Countries and organisations that cannot afford the amount of bandwidth required to satisfy the network demands of their users are severely hit by congestion.

In order to get the best utilization out of any resource we need to first understand how it is used. Computer networks are no exception. We need to understand the behaviour of network traffic in order to plan for and implement the necessary infrastructure and to identify the reasons for any problem the users may encounter. Measurement is a basic way of understanding the nature of network traffic and evaluating how network resources are utilised.


Keywords: Network Measurements, Congestion Control, Persistent Congestion, TCP, Bandwidth Latency


15 November 2002, 10.15

Licentiate seminar by Karol Ostrovsky, Computing Science, Chalmers University of Technology:

Higher Order Broadcasting Systems

Room: MD10, Matematiskt centrum, Eklandagatan 86, Göteborg

Abstract:

Ethernet-style broadcast is a pervasive style of computer communication. In this style, the medium is a single nameless channel. Previous work on modelling such systems proposed CBS. In this dissertation, we propose a fundamentally different calculus called HOBS. Compared to CBS, HOBS 1) is higher order rather than first order, 2) supports dynamic subsystem encapsulation rather than static, and 3) does not require an "underlying language" to be Turing-complete. Moving to a higher order calculus is key to increasing the expressivity of the primitive calculus and alleviating the need for an underlying language. The move, however, raises the need for significantly more machinery to establish the basic properties of the new calculus. This dissertation develops the basic theory for HOBS and presents two example programs that illustrate programming in this language. The key technical underpinning is an adaptation of Howe's method to HOBS to prove that bisimulation is a congruence. From this result, HOBS is shown to embed the lazy lambda-calculus, and partially encode pi-calculus and its broadcasting version b-pi-calculus.

 



8 November 2002, 15.15

Licentiate seminar by Peter Rundberg, Computer Engineering, Chalmers University of Technology:

Data Dependence Speculation Methods to Expose Thread-Level Parallelism

Room: Sal EL43, Linsen, Hörsalsvägen 11, Göteborg

Abstract:

To reach high performance on modern and future computer systems, parallel execution is essential. Running a program on a parallel computer, such as a multiprocessor or a multithreaded processor, requires the program to be split into sections that each processing element can execute. These sections, or threads, have to be created either manually, when the programmer designs the program, or automatically through parallelization tools. In either case, threads can only execute in parallel if they are data independent. In this thesis I present two methods that help the programmer to identify which program sections are data independent.

The first method involves techniques that aid automatic tools in extracting parallelism from sequential programs. These techniques allow a compiler to identify parallel sections without proving that they are data independent. To provide correctness, a run-time system monitors memory references so that they execute in the correct order. This completely software-implemented run-time system, called a Data Dependence Speculation system, is the first method. With this method it is shown that speedups of up to ten times are achievable on 16 processors for code that compilers cannot parallelize.
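
The core run-time check can be sketched as follows in Haskell (my toy illustration, not the thesis's system): each speculative thread records the addresses it reads and writes, and two threads may commit in parallel only if neither writes a location the other touches.

import qualified Data.Set as Set

type Addr = Int

-- The memory references recorded for one speculative thread.
data Access = Access
  { reads  :: Set.Set Addr
  , writes :: Set.Set Addr
  }

-- Two speculative threads conflict if the writes of one overlap the reads or
-- writes of the other; conflicting threads must be re-executed in program order.
conflict :: Access -> Access -> Bool
conflict a b =
     overlaps (writes a) (reads b `Set.union` writes b)
  || overlaps (writes b) (reads a)
  where
    overlaps s t = not (Set.null (s `Set.intersection` t))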

The second method helps a programmer to expose parallelism in critical sections. As the concurrency of a parallel program can be limited by overly conservative synchronization, this method allows the machine to execute past busy locks in a speculative manner. The machine ensures correctness by enforcing the programmer's intended path of execution. The method is compared to competing methods, and it is shown that its unique reordering feature provides enhanced performance and fewer misspeculations.

 


21 October 2002, 13.00

Licentiate seminar by Peter Gennemark, Computing Science, Chalmers University of Technology:

A model identification algorithm for cell signalling pathways

Room: MD7, Matematiskt centrum, Eklandagatan 86, Göteborg

Abstract:
The dynamic behaviour of cell signalling pathways is usually studied by differential equation models. In order to build such models we have classified common biochemical reactions into different types that are used as structural building blocks. To compare data from different experiments we have also classified experiments into different categories.
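
As an example of the kind of structural building block meant here (generic mass-action kinetics, not necessarily one of the thesis's reaction types), a reaction A + B -> C with rate constant k contributes the terms

    \frac{d[C]}{dt} = k\,[A][B], \qquad \frac{d[A]}{dt} = \frac{d[B]}{dt} = -k\,[A][B]

to the differential equation model, so identifying a model amounts to selecting which such terms appear and estimating their rate constants.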

Usually, models are manually inferred from experimental data. As the main result of this thesis we present a model identification algorithm that automatically identifies both the structure and the parameters of a model from experimental data, provided that this data is sufficiently extensive. The algorithm is a carefully designed heuristic algorithm that is efficient for pathways of realistic size.

Presently, artificial, but biologically plausible, models and simulated data from these models have been used to test the algorithm. The algorithm can potentially handle real biological experiments: the number of measurement points can be reduced to acceptable levels and the algorithm can handle noisy data.

As a secondary result of this thesis we present a prototype software tool, where data simulation and model identification are integrated into a single virtual laboratory environment.

 


10 October 2002, 13.15

Licentiate seminar by Josef Svenningsson, Computing Science, Chalmers University of Technology:

Efficient and Accurate Constraint Based Program Analysis, and Aspects of Program Optimization

Room: MD9, Matematiskt centrum, Eklandagatan 86, Göteborg

Abstract:

This dissertation is divided into two parts.

The first part concerns constraint-based program analysis. The first paper in this part tackles the problem of efficiently performing context-sensitive program analysis. We develop the notion of constraint abstractions and show how they can be used in constructing an efficient algorithm for computing solutions to context-sensitive program analyses. As an example we give a flow analysis with flow subtyping, flow polymorphism and flow-polymorphic recursion, and show how it can be implemented in O(n^3) time.

The second paper in the first part gives a concrete example of a constraint based context sensitive analysis. It is a usage analysis which keeps track of how many times a part of a program is used. This information can be used for optimising lazy functional programs. The analysis is specifically designed to scale in both precision and speed for large programs. The analysis employs the techniques developed in the first paper to achieve those goals.

The second part concerns fusion-based program optimisation. In functional languages, intermediate data structures are often used as glue to connect separate parts of a program. Many techniques have been proposed to automatically remove intermediate data structures. In the last paper we study the dual of one of the most successful techniques, short cut fusion. We show that the dual solves some longstanding problems in intermediate data structure removal. Specifically, it can handle functions taking several arguments all of which are to be removed, and functions consuming their input using accumulating parameters. These situations have previously only been handled by very sophisticated methods.
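
For readers unfamiliar with the technique, the classic (non-dual) foldr/build formulation of short cut fusion looks as follows in Haskell; the thesis studies its dual, which copes with accumulating parameters and consumers of several arguments. This sketch is background only, not the thesis's contribution.

{-# LANGUAGE RankNTypes #-}

-- A producer written in terms of build ...
build :: (forall b. (a -> b -> b) -> b -> b) -> [a]
build g = g (:) []

upto :: Int -> Int -> [Int]
upto lo hi = build (\cons nil ->
  let go i | i > hi    = nil
           | otherwise = cons i (go (i + 1))
  in go lo)

-- ... can be fused with a consumer written in terms of foldr via the rule
--     foldr k z (build g)  =  g k z
-- so that, e.g., sumUpto runs without constructing the intermediate list.
sumUpto :: Int -> Int
sumUpto n = foldr (+) 0 (upto 1 n)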

 


21 May 2002, 10.00

Licentiate seminar by Martin Kämpe, Computer Engineering, Chalmers University of Technology:

Prediction Methods for Branch and Cache Management in Computers

Room: EL43, Hörsalsvägen 4, Göteborg

Abstract:

A performance limitation in computers is the gap between processor and main memory speed. While a cache between the processor and memory reduces the gap, a careful selection of the part of main memory reflected in the cache is critical. The block size is a critical parameter for this selection: an increased block size normally reduces the number of accesses to main memory, but it can cause cache pollution. The replacement algorithm is another critical parameter. Although the classical LRU (Least Recently Used) algorithm may achieve a prediction accuracy around 90% when selecting the block to replace, the increased speed gap calls for more accurate algorithms. The selection limitations of a fixed block size and of the LRU algorithm are addressed in this thesis. Another performance limitation is caused by branch execution in pipelined multiple-issue processors. As the branch outcome is first known deep in the pipeline, an early outcome misprediction can nullify the execution of several instructions. Contemporary methods to predict the outcome achieve a prediction accuracy of around 90%. The high penalty for mispredicted branches in future processors demands a higher prediction accuracy. The thesis addresses this problem.

First in this thesis, a method to predict and adjust the block size dynamically is presented. The basic idea is to predict the amount of contiguous data accessed, based on past behavior, and then load that amount of data upon a cache miss. As the amount of contiguous data varies during program execution, a dynamically adjusted block size in the cache reduces the number of accesses to main memory without introducing much cache pollution. Secondly, I present a method that identifies and predicts the mistakes performed by the classical LRU replacement algorithm. Associating a mistake with the memory instruction causing it makes it possible to predict and prevent future mistakes caused by the same instruction. Hence the efficiency of the replacement algorithm is increased, improving cache performance.

The third contribution in this thesis is twofold and in the field of branch prediction. First, it presents the benefits of taking an abstract view of prediction by using Fourier analysis rather than the time-series based prediction methods proposed in the literature. Secondly, the program behavior causing current branch predictors to have approximately 90% accuracy is partly identified by applying Fourier analysis to the branch outcome history. Since a prediction is based on a relationship between past and future behavior, this relationship must be found to make a prediction. I show that Fourier analysis can identify relationships ranging over longer time spans than current branch predictors can harvest and is able to store this information in a very efficient way.
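
A small Haskell sketch of the frequency-domain view (my illustration, not the thesis's analysis): map the taken/not-taken history to +1/-1 and compute a naive discrete Fourier transform; a dominant component at index k indicates that the outcome pattern repeats with a period of roughly n/k, a regularity that may span more outcomes than a fixed-length predictor history can capture.

import Data.Complex

-- Naive DFT of a branch outcome history (True = taken), mapped to +/-1.
dft :: [Bool] -> [Complex Double]
dft outcomes =
  [ sum [ x j * cis (-2 * pi * fromIntegral (j * k) / fromIntegral n)
        | j <- [0 .. n - 1] ]
  | k <- [0 .. n - 1] ]
  where
    n   = length outcomes
    x j = (if outcomes !! j then 1 else -1) :+ 0

-- Magnitude spectrum; a peak at index k suggests a period of about n / k outcomes.
spectrum :: [Bool] -> [Double]
spectrum = map magnitude . dft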

Keywords: Spatial locality, cache performance, cache management, shadow directory, branch prediction, Fourier analysis, computer architecture

 


16 May 2002, 10.00

Licentiate seminar by Mathias Johansson, Computer Engineering, Chalmers University of Technology:

Video Communication over the Internet

Room: EL43, Linsen, ED-huset, Göteborg

Abstract:
The Internet was originally designed as a data communication network, primarily supporting asynchronous applications such as file transfer and electronic mail. Already at an early stage, however, the potential of using the Internet for synchronous interpersonal communication was realized. Since then, numerous applications of real-time multimedia communication systems have been demonstrated.

One of the most compelling types of extraverbal communication to support over a network is that employing the visual modality. However, the difficulties in realizing large-scale video communication systems over the Internet must not be underestimated. The best-effort service model of the Internet, without guarantees on timely delivery of packets, poses significant challenges for the realization of robust, high-quality video communication systems. Nonetheless, the feasibility of packet video communication systems has been successfully demonstrated. A number of unsolved issues remain, however. Of primary concern is how to make video communication systems scalable to a large number of widely distributed users. Since video communication is potentially very broadband, sensitive to delay, jitter and packet loss, and often multipoint in nature, a scalable transmission architecture is certainly not easy to design. Another fundamental question is how to support video communication in highly heterogeneous and dynamic network and computing environments. Since the Internet is built upon network connections of widely different capacities, and the computers connected to the Internet have vastly different characteristics, video applications must be adaptive to diverse and dynamic conditions.

This thesis contributes to the realization of a scalable and adaptive framework for video communication over the Internet by presenting novel algorithms and methods for multicast flow control and layered video coding. Moreover, the scope of Internet video is broadened by presenting a procedure for interconnecting multicast videoconferences with the World Wide Web. Finally, an enrichment of Internet video is proposed by introducing a system for stereoscopic video transmission.

Keywords: Teleconferencing, Internet video, layered multicast, flow control, congestion control, layered video coding, video gateways, stereoscopic video

 


26 April 2002, 13.15

Licentiate seminar by Daniel Eckerbert, Computer Engineering, Chalmers University of Technology:

Deep Submicron Issues in RTL Power Estimation

Room: EL43, Linsen, Hörsalsvägen 11, Göteborg

Abstract:
Microelectronics has seen an unsurpassed evolution over the last forty years, during which integrated circuits (ICs) have developed from containing a few transistors each to today's designs containing hundreds of millions of transistors. With that many devices integrated on a single IC, it has become vitally important that power consumption is part of the design process at an early stage. It is no longer feasible to design a system, only to redesign it if it does not perform within its power budget. Therefore, techniques for estimating power consumption are essential in designing modern integrated circuits.

The contributions in this thesis are twofold: first, the deficiencies of existing power models due to technology scaling are pointed out and techniques to correct them are presented. Second, the power models presented in the thesis enable new approaches to power estimation for entire systems, where interactions between components are taken into account.
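
The keyword list hints at which effects are involved; as general background (the standard CMOS power decomposition, not the thesis's models), the power drawn by a gate is usually split as

    P \;\approx\; \underbrace{\alpha\, C_L V_{DD}^{2} f}_{\text{switching}} \;+\; \underbrace{V_{DD}\, I_{sc}}_{\text{short-circuit}} \;+\; \underbrace{V_{DD}\, I_{leak}}_{\text{leakage}}

where \alpha is the switching activity, C_L the switched capacitance, V_{DD} the supply voltage and f the clock frequency; it is the short-circuit and leakage terms whose relative weight grows as technology scales.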

What has made the work in this thesis successful is the way in which power estimation has been approached: knowledge of the physical implementation of digital systems has made it possible to derive new, physically relevant models.

Keywords: VLSI, CMOS, RTL, Power Estimation, Deep Submicron, Short-Circuit, Leakage

 



16 April 2002, 10.00

Licentiate seminar by Peter Ljunglöf, Computing Science, Göteborg University:

Pure Functional Parsing - an Advanced Tutorial

Room: MD6, Matematiskt centrum, Eklandagatan 86, Göteborg

Abstract:
Parsing is the problem of deciding whether a sequence of tokens is recognized by a given grammar, and in that case returning the grammatical structure of the sequence.

This thesis investigates different aspects of the parsing problem from the viewpoint of a functional programmer. It is conceptually divided into two parts, discussing the parsing problem from different perspectives; first as a comprehensive survey of possible implementations of combinator parsers; and second as pure functional implementations of standard context-free parsing algorithms.

The first part of the thesis is a survey of the possible implementations of combinator parsers that have previously been suggested in the literature, relating their different usages. A number of previously unknown parser implementations are also given, which are especially efficient for small and medium-sized natural language applications.
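
A minimal example of the combinator style under discussion (a textbook formulation in Haskell, not one of the thesis's implementations): a parser is a function from the input string to a list of (result, remaining input) pairs, and larger parsers are composed from smaller ones with sequencing and choice.

-- A parser returns every way of parsing a prefix of the input.
newtype Parser a = Parser { runParser :: String -> [(a, String)] }

succeed :: a -> Parser a
succeed x = Parser (\inp -> [(x, inp)])

char :: Char -> Parser Char
char c = Parser (\inp -> case inp of
                           (x : xs) | x == c -> [(c, xs)]
                           _                 -> [])

-- Sequencing: run p, then feed its result to f and run the resulting parser.
andThen :: Parser a -> (a -> Parser b) -> Parser b
andThen p f =
  Parser (\inp -> concat [ runParser (f a) rest | (a, rest) <- runParser p inp ])

-- Choice: try both alternatives and keep every successful parse.
orElse :: Parser a -> Parser a -> Parser a
orElse p q = Parser (\inp -> runParser p inp ++ runParser q inp)

-- Example grammar  S ::= 'a' S | 'b' ,  returning the number of leading 'a's.
s :: Parser Int
s = (char 'a' `andThen` \_ -> s `andThen` \n -> succeed (n + 1))
    `orElse` (char 'b' `andThen` \_ -> succeed 0)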

Combinator parsers are not very well suited for large-scale applications, and especially not for large-scale natural language processing. In those cases it is necessary to use other parsing algorithms. The second part of the thesis defines elegant and declarative, pure functional versions of some standard parsing algorithms for context-free grammars. The goal has been to implement the algorithms in a way that is close to their intuitive formulations, without sacrificing computational efficiency. The implementations only use simple data structures and do not rely on a global updateable state, thus opening the way for nice functional implementations.

Finally, the thesis implements parser combinators that can collect the grammatical structure in the program, making it possible to use any suitable parsing algorithm and not just recursive descent. However, this requires a mildly impure extension of the host language Haskell.


Keywords: context-free grammars, parsing algorithms, parser combinators, functional programming.

The source code from the thesis is available at
http://www.cs.chalmers.se/~peb/parsing

 


26 March 2002, 10.00

Licentiate seminar by Carlos Gonzalía, Computing Science, Göteborg University:

Relation Calculus in Martin-Löf Type Theory

Room: Hörsalen, Matematiskt centrum, Eklandagatan 86, Göteborg

Abstract:
This thesis presents a formalisation, carried out with the help of a proof assistant for Martin-Löf type theory, of a constructive, categorical, calculus of relations. The main goal is the integration of these two different programming logics, so that type theory and relation calculus can benefit from each other for the tasks of program derivation and proving properties of programs. Our formalisation intends to become a toolbox for using relational ideas and methods inside type theory, with the help of the proof assistant Agda/Alfa.

We set up the necessary abstract definitions of a constructive calculus of relations based on categories (LT-allegories), so as to get an analogue of the corresponding classical system (allegories). The category of sets and relations is the concrete version of allegories, and we prove that its analogue (the LT-category of E-sets and E-relations) satisfies the definitions of the several classes of LT-allegories. However, this fails for the allegories involving complements or power constructions.
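
To fix intuitions about the operations being axiomatised (plain set-theoretic relations in Haskell, not the E-relations or the proof-assistant formalisation of the thesis), the three characteristic allegory operations of composition, converse and meet can be written directly for finite relations:

import Data.List (nub)

-- A finite relation between a and b, as a list of related pairs.
type Rel a b = [(a, b)]

-- Composition: x is related to z iff some y links them through r and s.
compose :: (Eq a, Eq b, Eq c) => Rel a b -> Rel b c -> Rel a c
compose r s = nub [ (x, z) | (x, y) <- r, (y', z) <- s, y == y' ]

-- Converse: reverse the direction of the relation.
converse :: Rel a b -> Rel b a
converse r = [ (y, x) | (x, y) <- r ]

-- Meet: the intersection of two relations of the same type.
meet :: (Eq a, Eq b) => Rel a b -> Rel a b -> Rel a b
meet r s = [ p | p <- r, p `elem` s ]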

To overcome this limitation, we restrict the setting to finite E-sets and decidable E-relations, and prove that this restricted version satisfies all the definitions of the different classes of LT-allegories. We then show why this finite allegory is a good setting for relational database theory, and how we could use our formalisation to deal with database concepts in type theory. We also show how our formalisation allows us to express relational programs inside it, paving the way for our integration goal in future work.

Keywords: programming logic, type theory, relational calculus, relational database theory, categories of relations, formalised mathematics, program calculation.

 


6 March 2002, 10.00

Licentiate seminar by Emilie Lundin, Computer Engineering, Chalmers University of Technology:

Aspects of Employing Fraud and Intrusion Detection Systems

Room: Sal HA2, Hörsalsvägen 4, Göteborg

Abstract:
Computer systems of today are subject to many attacks and much misuse, and it can be anticipated that these problems will increase in the future. One way of protecting the systems is to use better authentication and other types of preventive security mechanisms. In most cases, however, these mechanisms do not offer good enough protection and they should therefore be complemented with monitoring and detection mechanisms.

Intrusion detection and fraud detection are two research areas that have developed in parallel with little or no interaction. This is somewhat surprising, since it could be argued that fraud detection is largely a type of application-specific intrusion detection, where the goal of the attacker is to benefit from a commercial service. Thus, it seems probable that many of the findings achieved for intrusion detection systems (IDSs) are applicable to fraud detection systems (FDSs) and vice versa. The thesis studies how these areas could be successfully merged and what kind of benefits could be achieved by cross-fertilisation, i.e. how we can utilise experience in one area to push the other area forwards.

In fraud detection it is normally easier to get authentic data upon which system verification and testing can be based. However, in many cases it may be a problem to get a sufficient amount of data even for FDSs. We have therefore suggested a method by which large amounts of synthetic test data can be generated based on the statistical properties of authentic data, which is very beneficial since it makes it possible to get a sufficient amount of data with suitable properties. The method seems to work satisfactorily for fraud detection and should also be applicable to intrusion detection. Looking in the other direction, major parts of the vast experience gathered from intrusion detection can probably be used also in fraud detection. In particular, the problems encountered in user and program profiling are certainly problems also for fraud detection systems. This has been studied and illustrated for intrusion detection and a number of remedies have been suggested; these could also be used in fraud detection. Finally, a comprehensive taxonomy for the intrusion detection research area has been suggested and relevant research in the sub-areas has been surveyed.
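
A toy Haskell sketch of the synthetic-data idea (my illustration only; the thesis's method is far more elaborate, and the random package is assumed for sampling): estimate a simple statistical profile, here just the mix of event types, from a small authentic log, and then sample an arbitrarily large synthetic log from that profile.

import Data.List (group, sort)
import System.Random (randomRIO)   -- from the random package

-- Empirical profile of a small authentic log: event type and its frequency.
profile :: [String] -> [(String, Int)]
profile events = [ (head g, length g) | g <- group (sort events) ]

-- Draw one synthetic event according to the weighted profile.
sampleEvent :: [(String, Int)] -> IO String
sampleEvent weighted = do
  r <- randomRIO (1, sum (map snd weighted))
  return (pick r weighted)
  where
    pick _ []              = error "empty profile"
    pick r ((e, w) : rest)
      | r <= w             = e
      | otherwise          = pick (r - w) rest

-- An arbitrarily long synthetic log with the same event mix as the authentic one.
syntheticLog :: Int -> [String] -> IO [String]
syntheticLog n authentic =
  mapM (const (sampleEvent (profile authentic))) [1 .. n]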

 


6 March 2002, 10.00

Licentiate seminar by Håkan Sundell, Computing Science, Chalmers University of Technology:

Applications of Non-Blocking Data Structures to Real-Time Systems

Room: Sal MD6, Matematiskt centrum, Eklandagatan 86, Göteborg

Abstract:
This thesis is a major part of the results within a project called "Applications of wait/lock-free protocols to real-time systems". The project is funded by the national Swedish Real-Time Systems research initiative ARTES (www.artes.uu.se), supported by the Swedish Foundation for Strategic Research. The project started in March 1999 and has mainly been run with one active Ph.D. student. The aim of the project was to study non-blocking algorithms and apply them to real-time systems. The applications should focus on commercial real-time operating systems, with Enea OSE as the major target.

We started with evaluating wait-free snapshot algorithms suitable for real-time systems. As snapshots are very useful in distributed embedded systems, we focused on experiment models using the CAN-bus.

We have studied how timing information available in real-time systems can be used to improve and simplify wait-free algorithms for shared data structures. The results from experiments and analytical evaluations show significant improvements and high applicability within real-time systems.

We have carried out studies and implementation work with known lock-free and wait-free data structures. Some algorithms have been corrected and refined for implementation. Several of these data structures, as well as those presented in this thesis, have been put together into a library. The functionality of the library is designed to be easily accessible to non-experts in non-blocking protocols. Experiments on multi-processor platforms show significant improvements.
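
As a flavour of the non-blocking style (a minimal sketch, not taken from the thesis's library): a Treiber-style stack in Haskell where push and pop are single atomic updates of the head reference. GHC implements atomicModifyIORef' with a compare-and-swap retry loop, so no operation ever blocks behind a lock held by another thread.

import Data.IORef

newtype Stack a = Stack (IORef [a])

newStack :: IO (Stack a)
newStack = Stack <$> newIORef []

-- Push: one atomic update of the head; no locks are taken.
push :: Stack a -> a -> IO ()
push (Stack ref) x = atomicModifyIORef' ref (\xs -> (x : xs, ()))

-- Pop: atomically detach the head element, if any.
pop :: Stack a -> IO (Maybe a)
pop (Stack ref) = atomicModifyIORef' ref step
  where
    step []       = ([], Nothing)
    step (x : xs) = (xs, Just x)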

Keywords: real-time, synchronisation, non-blocking, timing information, snapshot, shared memory, lock-free, wait-free

 


6 March 2002, 14.00

Licentiate seminar by Håkan Kvarnström, Computer Engineering, Chalmers University of Technology:

Securing and Evaluating Fraud and Intrusion Detection Systems

Room: Sal HA2, Hörsalsvägen 4, Göteborg

Abstract:
Computer systems have, since the beginning of their existence, been subject to misuse of various kinds. Security vulnerabilities have effectively been exploited for intrusions, i.e. gaining unauthorized access to resources and information, thus bypassing existing security measures. Also, for systems that deliver services to consumers, various types of fraud have been committed. A typical example is excessive use of a service without paying for it, something that could be accomplished by bypassing the charging mechanism. Automated fraud and intrusion detection systems can be designed to detect such attempts at misuse and to provide an early-warning system, which allows a security officer to stop the misuse before serious damage occurs.

This thesis discusses a number of issues related to fraud and intrusion detection. The security implications of distributing intrusion detection systems are discussed extensively. Here, one problem is that the distribution of the security policy may in itself present a security flaw. Therefore, self-protection mechanisms that prevent information leakage from the detection policy are proposed. We also discuss the importance of training and testing of detection systems. In particular, a method for generating large amounts of synthetic test data based on small amounts of authentic data is suggested and evaluated. Such data play an important role for the efficiency and operational quality of the detection system. Finally, an architecture that combines intrusion and fraud detection is proposed with the intention of increasing the reliability and efficiency of the detection.

 


31 January 2002, 15.00

Licentiate seminar by Jonas Jalminger, Computer Engineering, Chalmers University of Technology:

On Improving Data Cache Space Utilization


Room: Sal EL43, Hörsalsvägen 11, Göteborg

Abstract:
An important component affecting performance and energy efficiency is the memory system. A key mechanism for improving both properties is a cache placed between the processor and the memory. Due to both performance and power constraints, caches cannot be made arbitrarily large, and the resulting trade-offs must be carefully considered to find suitable compromises.

The aim of this thesis is to use caches efficiently by improving the utilization of the space devoted to data caches. Two different techniques are proposed. The first technique addresses the issue that increased block sizes usually yield better performance but are detrimental to energy usage because of the lower block utilization of larger blocks. The proposal is to use smaller block sizes and compensate for the performance loss using selective software-controlled prefetching. By using a performance-tuning tool, the memory instructions generating the bulk of the misses are identified. By inserting prefetch instructions to cover the misses generated by these instructions, it is shown that it is possible to get the performance of larger blocks while keeping the energy usage at that of smaller blocks.

The second technique improves cache space utilization by only allocating cache blocks that will be reused during their lifetime in the cache. As the decision whether to cache or not is needed ahead of time, the technique employs reuse prediction. Each prediction suggests whether a block should be allocated in the cache or not, based on the block's reuse history. The mechanism employed to make the predictions has its roots in two-level branch prediction. It is shown that using this mechanism and a single-entry bypass buffer it is possible to reduce the miss rate by up to 17% compared to a baseline employing an always-allocate strategy.
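
A simplified sketch of such a reuse predictor (hypothetical table sizes and my own naming, not the thesis's design): each memory instruction keeps a short history of whether its recent blocks were reused, and that history pattern indexes a table of saturating counters, mirroring two-level branch prediction; a low counter suggests bypassing the cache through the bypass buffer instead of allocating.

import qualified Data.Map as Map
import Data.Bits (shiftL, (.&.), (.|.))

type PC = Int

data Predictor = Predictor
  { histories :: Map.Map PC Int   -- per-instruction history of recent reuse outcomes
  , counters  :: Map.Map Int Int  -- history pattern -> 2-bit saturating counter
  }

histBits :: Int
histBits = 4

emptyPredictor :: Predictor
emptyPredictor = Predictor Map.empty Map.empty

-- Allocate the block only if the counter for this instruction's current
-- history pattern leans towards "will be reused"; otherwise bypass.
shouldAllocate :: Predictor -> PC -> Bool
shouldAllocate p pc = Map.findWithDefault 2 h (counters p) >= 2
  where
    h = Map.findWithDefault 0 pc (histories p)

-- Train with the observed outcome (True = the block was reused before eviction).
train :: Predictor -> PC -> Bool -> Predictor
train p pc reused =
  p { counters  = Map.insert h c' (counters p)
    , histories = Map.insert pc h' (histories p) }
  where
    h  = Map.findWithDefault 0 pc (histories p)
    c  = Map.findWithDefault 2 h (counters p)
    c' = if reused then min 3 (c + 1) else max 0 (c - 1)
    h' = ((h `shiftL` 1) .|. (if reused then 1 else 0)) .&. (2 ^ histBits - 1)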

 


 
