PhD and Licentiate seminars 2005

November 28, 2005,
10:15 am

Room: EB, Hörsalsvägen 11, Göteborg.



Public defence of PhD thesis by Peter Gennemark, Computing Science, Chalmers University of Technology:

Modeling and identification of biological systems with emphasis on osmoregulation in yeast

Faculty opponent: Professor Olaf Wolkenhauer, Institut für Informatik, Universität Rostock, Germany.


This thesis deals with two topics in the area of systems biology. The first topic, model identification, concerns the problem of automatically identifying a mathematical model of a biochemical system from experimental data. We present algorithms for model selection and parameter estimation that identify both the structure and the parameters of a differential equation model from experimental data. The algorithms are designed to handle problems of realistic size, where reactions can be non-linear in the parameters and where data can be sparse and noisy. To achieve computational efficiency, parameters are estimated for one equation at a time, giving a fast and accurate parameter estimation algorithm compared to other algorithms in the literature. The model selection is done with an efficient heuristic search algorithm, where the structure is built incrementally. The main strengths of our algorithms are that a complete model, and not only a structure, is identified, and that they are considerably faster compared to other identification algorithms.
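The decoupling idea above, estimating parameters for one equation at a time, can be illustrated with a toy sketch (names and the example model are invented here, not taken from the thesis): derivatives are approximated directly from the data, so each equation's parameters can be fitted by a small independent least-squares problem instead of one large joint estimation.

```python
import math

def finite_diff(times, values):
    """Approximate dx/dt at interior sample points by central differences."""
    return [(values[i + 1] - values[i - 1]) / (times[i + 1] - times[i - 1])
            for i in range(1, len(times) - 1)]

def fit_decay_rate(times, x):
    """Fit k in dx/dt = -k*x for a single equation by least squares:
    minimizing sum((dxdt_i + k*x_i)^2) gives k = -sum(dxdt*x)/sum(x^2)."""
    dxdt = finite_diff(times, x)
    xs = x[1:-1]
    return -sum(d * v for d, v in zip(dxdt, xs)) / sum(v * v for v in xs)

# Toy data generated from dx/dt = -0.5*x with x(0) = 2
times = [0.1 * i for i in range(50)]
x = [2.0 * math.exp(-0.5 * t) for t in times]
print(round(fit_decay_rate(times, x), 3))  # recovers a rate close to 0.5
```

Because each equation is fitted separately, the cost grows with the number of equations rather than with the size of the joint parameter space, which is the source of the speedup the abstract describes.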

The second topic concerns mathematical modeling of osmoregulation in the yeast Saccharomyces cerevisiae. This system involves the biophysical and biochemical responses of a cell when it is exposed to an osmotic shock. We present two different differential equation models based on experimental data of this system. The first model is a detailed model taking into account an extensive amount of molecular detail, while the second is a simple model with less detail. We demonstrate that both models agree well with experimental data on wild-type cells. Moreover, the models predict the behavior of other genetically modified strains and input signals.

Keywords: model identification, model selection, parameter estimation, ordinary differential equations, Saccharomyces cerevisiae, osmotic stress

September 13, 2005,
10:15 am

Room: EE, Rännvägen 6, Chalmers tekniska högskola, Göteborg

Public defence of PhD thesis by Mattias Grönkvist, Computing Science, Chalmers University of Technology:

The Tail Assignment Problem

Faculty opponent: Professor Cynthia Barnhart, Department of Civil & Environmental Engineering, Massachusetts Institute of Technology


The general aircraft assignment problem is the problem of assigning flights to aircraft in such a way that some operational constraints are satisfied, and possibly that some objective function is optimized.

We propose an approach to aircraft assignment which captures all operational constraints, including minimum connection times, airport curfews, maintenance, and preassigned activities. It also allows multiple fleet optimization, and can model various types of objective functions, which sets it apart from most other aircraft assignment approaches. The name we have chosen for our approach is the Tail Assignment problem. The tail assignment problem is general in the sense that it can model the entire aircraft planning process, from fleet assignment to the day of operation.

We develop solution methods based on mathematical programming and constraint programming, and also show how the methods can be combined. The complete hybrid solution method is general in the sense that it can be used both to quickly find initial solutions and to find close to optimal solutions, if more running time is allowed.

We present a mathematical programming approach based on column generation, conduct thorough computational experiments to show the impact of various implementation choices on running time and convergence, and present fixing heuristics to find integer solutions.

We show how constraint programming can be used stand-alone to quickly produce solutions to the tail assignment problem, and to substantially improve the computational performance of the stand-alone column generation algorithm. Preprocessing algorithms based on constraint programming are presented that can reduce the size of the problem substantially more than standard balance-based preprocessing, resulting in major speedups and increased solution quality. Our complete solution approach combines column generation, constraint programming and local search.

Finally, as proof of concept, we present modeling examples and computational results for a selection of real-world tail assignment instances, demonstrating how our model and solution methods can be used to increase operational robustness, enforce equal aircraft utilization, and decrease aircraft leasing costs. Our tail assignment system is currently in use at two medium-sized airlines.


September 8, 2005,
1:15 pm

Room: Wigforssalen, Högskolan i Halmstad

Public defence of PhD thesis by Björn Åstrand, Computer Engineering, Chalmers University of Technology:

Vision-based Perception for Mechatronic Weed Control

Faculty opponent: Professor David Slaughter, Department of Biological and Agricultural Engineering, University of California, Davis, USA


The use of computer-based signal processing and sensor technology to guide and control different types of agricultural field implements increases the performance of traditional implements and even makes it possible to create new ones. This thesis extends the knowledge of vision-based perception for mechatronic weed control. The contributions are of four kinds:

First, a vision-based system for row guidance of agricultural field machinery has been proposed. The system uses a novel method, based on the Hough transform, for recognition of crop rows.

The second contribution is a vision-based perception system for discriminating between crops and weeds, using images from real situations in the field. Most crops are cultivated in rows and sown in a defined pattern, i.e. with a constant inter-plant distance. The proposed method introduces the concept of using these geometrical properties of the scene (context) for single-plant recognition and localization. A mathematical model of a crop row has been derived that models the probability of the positions of consecutive crops in a row. Based on this mathematical model, two novel methods for context-based classification between crops and weeds have been developed. Furthermore, a novel method that combines geometrical features of the scene (context) with individual plant features has been proposed. The method has been evaluated on two datasets of images of sugar beet rows; the classification rates were 92% and 98%, respectively.
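The context idea, that crops sit near regularly spaced positions along the row while weeds do not, can be sketched as follows. All positions, the spacing, and the Gaussian tolerance below are invented illustration values, not the thesis's actual model.

```python
import math

def classify(positions, spacing, sigma, threshold=0.5):
    """Label each plant position using a Gaussian likelihood of its offset
    from the nearest multiple of the nominal inter-plant distance."""
    labels = []
    for p in positions:
        offset = p - spacing * round(p / spacing)  # distance to nearest grid point
        likelihood = math.exp(-offset ** 2 / (2 * sigma ** 2))
        labels.append("crop" if likelihood >= threshold else "weed")
    return labels

# plant positions in metres along the row; one plant sits between grid points
positions = [0.02, 0.25, 0.51, 0.63, 1.00]
print(classify(positions, spacing=0.25, sigma=0.03))
```

A real system would also combine this contextual score with per-plant appearance features, as the paragraph above describes; this sketch shows only the geometric half.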

The third contribution is the design of a mobile agricultural robot equipped with these perception systems and a mechanical weeding tool intended for intra-row weed control in ecologically cultivated crops.

The fourth contribution is a demonstration of the feasibility of the perception systems in real field environments, especially with respect to robustness and real-time performance. The row guidance system has been implemented in three different row cultivators and performed inter-row weed control at two commercial farms. The robot has proven to be able to follow a row structure by itself, while performing weed control within the seed line of a crop row, i.e. intra-row cultivation.

June 9, 2005,
1:15 pm

Room: EF, Hörsalsvägen 11, Chalmers tekniska högskola, Göteborg

Public defence of PhD thesis by Jonny Vinter, Computer Engineering, Chalmers University of Technology:

On the Effects of Soft Errors in Embedded Control Systems

Faculty opponent: PhD and Principal Research Scientist Zbigniew Kalbarczyk, Coordinated Science Laboratory, University of Illinois


This thesis investigates techniques for making closed loop control systems fault-tolerant and robust with respect to soft errors occurring in the computer hardware. Soft errors are caused by transient faults that alter the binary values stored in latches, flip-flops and other state elements without causing any permanent damage to the hardware. Soft errors caused by ionizing particles such as high energy neutrons are expected to become a dominating source of hardware failures in future digital circuits. Software implemented techniques for detecting and tolerating soft errors for closed loop control systems are proposed and evaluated. These software techniques are designed to serve as a complement to hardware implemented error detection and correction mechanisms that are present in most computer systems. The objective is to provide a software layer of fault-tolerance mechanisms that can detect, mask or recover from soft errors that escape the hardware mechanisms. Fault injection experiments with control systems for both a four-stroke combustion engine and a jet engine show that a majority of the soft errors (single bit-flips) in CPU-registers and memory have no or minor impact on the behavior of the engines. However, the experiments also show that a small but significant number of the errors result in critical engine failures. These critical failures are predominantly caused by soft errors affecting the state variables of the control algorithm. We present the design and validation of two error detection and recovery techniques called Best Effort Recovery and the Robust Integrator. These techniques are designed to protect the controller state and are experimentally validated by fault injection experiments. The Best Effort Recovery technique performs a rollback recovery if the state variables or the control output are outside defined value bounds. 
The Robust Integrator is constructed as a generic component in a tool for model-based design and can thus be used for robust implementation of a wide range of control algorithms. To validate these techniques, we have developed a new fault injection tool called GOOFI (Generic Object-Oriented Fault Injection). The tool has been designed to be easily adaptable to different target systems and simple to extend with new fault injection techniques and fault models.
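A minimal sketch of the Best Effort Recovery idea described above, simplified and with invented bounds and dynamics: after each control step the state variable is range-checked, and an out-of-bounds value (e.g. caused by a bit flip) triggers a rollback to the last known-good state.

```python
def control_step(state, error, gain=0.1):
    """One integrator update of a toy controller state."""
    return state + gain * error

def run_with_recovery(errors, low=-10.0, high=10.0):
    state = 0.0
    last_good = state
    for e in errors:
        state = control_step(state, e)
        if not (low <= state <= high):  # range check on the state variable
            state = last_good           # best-effort rollback
        else:
            last_good = state           # checkpoint the verified state
    return state

# a soft error is simulated as one huge, corrupted error sample
print(round(run_with_recovery([1.0, 1.0, 1e6, 1.0]), 2))
```

The corrupted sample is discarded and the loop continues from the last checkpoint, so the final state is what three valid steps would produce; the thesis's actual mechanisms also bound the control output, which this sketch omits.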

Keywords: Embedded Control Systems, Fault Tolerance, Soft Errors, Concurrent Error Detection, Error Recovery Techniques, Dependable Software, Fault Injection, Pre-Injection Analysis.

June 8, 2005,
1:15 pm

Room: HC2, Hörsalsvägen 11, Chalmers tekniska högskola, Göteborg

Public defence of PhD thesis by Roger Johansson, Computer Engineering, Chalmers University of Technology:

On distributed control-by-wire systems for critical applications

Faculty opponent: Professor Martin Törngren, Department of Machine Design, KTH Royal Institute of Technology


The implementation of closed loop control systems is an important field for the exploitation of embedded real-time computer systems. A particularly interesting class of closed loop control applications is found in vehicle dynamics control. Within automotive electronics engineering, for example, drive-by-wire refers to a class of applications in which there are no mechanical, pneumatic or hydraulic physical connections between the steering wheel, pedals, wheels and engine control. We commonly refer to such solutions as control-by-wire applications. Control objects in ground vehicles such as passenger cars and railway cars are in most cases physically distributed, so a distributed computer system becomes an intuitive choice for the computer control. When inherent redundancy is present due to the distribution of the control system, the objective becomes to take full advantage of it. This avoids introducing excessive redundancy for fault tolerance, giving a cost-efficient implementation.

In this thesis, we present a fault-tolerant computer architecture suitable for critical distributed control. Based on the architecture, we further propose a computer-based brake-by-wire control system where the design alternatives are heavily restricted by cost-efficiency requirements. The thesis presents a novel approach to the implementation of cost-efficient dependable electronic systems. The basic idea of taking advantage of the application's inherent redundancy is adopted in a non-complex hardware device. Simulation of the device has shown its suitability for the intended purposes. The complexity overhead introduced by this device has been analysed, and its cost and performance have been estimated.

The demands put upon the real-time communication system are significant for emerging distributed control applications. The paradigm of distributed information processing adopted within distributed control has an important impact on the underlying communication system. We start with identifying requirements on the communication network from a control system point of view. Then, we outline a research platform for evaluation of communication protocols.

Recent real-time communication protocols support both event-triggered and time-triggered communication. Thus, we may anticipate hybrid scheduling methods for future engineering needs. Therefore, we discuss valid criteria for practical usefulness of contemporary real-time communication protocols. We identify methods for timing analysis and scheduling of communication between nodes in critical systems.

Keywords: dependable systems, distributed systems, control-by-wire, inherent redundancy, fault-tolerance, time-triggered communication

June 1, 2005,
2:15 pm

Room: HC2, Hörsalsvägen 14, Göteborg

Public defence of PhD thesis by Kristofer Johannisson, Computing Science, Göteborg University:

Formal and Informal Software Specifications

Faculty opponent: Professor Heinrich Hußmann, Ludwig-Maximilians-Universität München, Institut für Informatik, LFE Medieninformatik, München, Germany.


The topic of this thesis is to bridge the gap between formal and informal software specifications. Formal specifications are required for the use of formal methods to verify the correctness of software. If we expect formal methods to be used in realistic software development projects, we need to enable people with varying levels of familiarity with formal specification languages to understand, maintain and create formal specifications.

To address these problems, we provide a tool for translating specifications written in the formal language OCL, which is part of the UML standard, into natural language. We also provide a multilingual, syntax-directed editor where OCL and natural-language specifications can be edited in parallel.

The implementation of our work is to a large extent based on the Grammatical Framework (GF). GF is a grammar formalism based on type theory, which provides a special purpose language for defining grammars, and a compiler for this language. We have developed a GF grammar for specifications in OCL and natural language. The grammar captures the OCL type system, its built-in constructions and the predefined types of the OCL library. It is dynamically extended with domain-specific concepts by generating GF grammar modules from UML class diagrams. The generated modules make use of a grammar-level API of common constructions, which means that these modules can be modified without requiring GF expertise. To improve the readability of the translation of OCL specifications, the grammar includes formatting of the produced natural language. Inspired by Natural Language Generation techniques like aggregation, we also apply transformations to abstract syntax trees using a program external to the GF grammar.

Our tool is a part of the KeY system, which integrates formal software specification and verification into the industrial software engineering processes. The tool has successfully been used for a non-trivial case study: translating the OCL specifications of the Java Card API into English.


April 20, 2005,
1:15 pm

Room: EA, Hörsalsvägen 11, Chalmers tekniska högskola, Göteborg


Public defence of PhD thesis by Stefan Lindskog, Computer Engineering, Chalmers University of Technology:

Modeling and Tuning Security from a Quality of Service Perspective

Faculty opponent: Professor Svein Knapskog, Department of Telematics, Faculty of Information Technology, Mathematics and Electrical Engineering, Norwegian University of Science and Technology


Security has traditionally been thought of as a system or network attribute that was the result of the joint endeavors of the designer, maintainer and user, among others. Even though security can never reach a level of 100%, the aim has been to provide as much security as possible, given the boundary conditions in question. With the advent of, e.g., many low-power computing and communication devices it has become desirable to trade security against other system parameters, such as performance and power consumption. Thus, in many situations, tunable or selectable security, rather than maximal security, is desirable. The overall focus of this thesis is therefore how security with a tunable level could be achieved and traded against other parameters.

To this end, basic security primitives, such as the intrusion process, flaws, and impairments, are studied. This contributes to a deeper understanding of fundamental problems and paves the way for security modeling. This part of the work provides a great deal of experimental data that are also used for modeling purposes. Attempts to model and systematize security are made based on the knowledge thus gained. The relation between security and dependability is touched upon, and the use of physical separation to achieve certain desirable security properties is pointed out. However, most of the modeling research is devoted to suggesting methods for achieving different security levels, i.e., tuning security, in particular for networked applications. Here, the widespread Quality of Service (QoS) concept turns out to be a proper means to embed this novel concept, and a taxonomy for tunable data protection services is suggested. Two data protection services are developed in order to test and verify the concept of tunable security. The evaluations are limited to networked applications and confidentiality through selective encryption schemes. The tests show good agreement between experimental and theoretical results.
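The general idea of selective encryption, that a tunable level chooses how much of the data stream is protected, trading confidentiality against processing cost, can be sketched as follows. This is an invented illustration of the concept the thesis evaluates, not its actual schemes, and the XOR "cipher" is a stand-in offering no real security.

```python
def selective_encrypt(blocks, level, key=0x5A):
    """Encrypt roughly `level` (0..1) of the blocks, spread evenly over
    the stream; return the processed blocks and the chosen indices."""
    n_enc = round(level * len(blocks))
    step = len(blocks) / n_enc if n_enc else 0
    chosen = {int(i * step) for i in range(n_enc)}
    out = [bytes(b ^ key for b in blk) if i in chosen else blk
           for i, blk in enumerate(blocks)]
    return out, chosen

blocks = [bytes([i] * 4) for i in range(10)]
_, chosen = selective_encrypt(blocks, level=0.5)
print(sorted(chosen))  # at level 0.5, every other block is protected
```

Raising `level` toward 1.0 recovers full encryption; lowering it reduces the cryptographic workload, which is the performance/security trade-off the QoS framing makes explicit.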

It is clear that future applications will require security that can be set to a desired level in order to optimize total system performance. This thesis shows that this is possible and gives some ideas as to how selectable security can be generally attainable.

April 1, 2005,
1:15 pm

Room HC1, Hörsalsvägen 14, Chalmers tekniska högskola, Göteborg

Public defence of PhD thesis by Peng Hui Tan, Computer Engineering, Chalmers University of Technology:

Simplified Graphical Approaches for CDMA Multi-User Detection, Decoding and Power Control

Faculty opponent: Professor Giuseppe Caire, Institut Eurecom, Sophia Antipolis, France


The individual optimal detector for code-division multiple-access (CDMA) systems is based on the marginal a posteriori distribution of the transmitted bits. The marginalization is in general hard, since it requires a summation whose size grows exponentially with the number of active users. Based on graphical representations of probability distributions, message-passing schemes like the belief propagation (BP) algorithm have become efficient tools for computing approximate marginal distributions. It is, however, well known that the BP algorithm does not always converge when the graphical representation has loops. Solving the Lagrangian dual problem of minimizing the Bethe free energy using simple block nonlinear Gauss-Seidel and Jacobi algorithms corresponds to serial and parallel message updating, respectively, of the BP algorithm. Using the block nonlinear Gauss-Seidel iteration, corresponding to serial updating, convergence is guaranteed since the dual of the Bethe free energy is concave.

The parallel and serial BP algorithms provide low-complexity alternatives for approximating the individual optimum decision for both uncoded and coded CDMA systems. Using a Gaussian approximation, the large-system bit error rate (BER) performance of these algorithms is closely related to the replica analysis of the individual optimal detector. Moreover, it can be shown that the parallel and serial BP algorithms converge to the same estimated large-system BER performance. From the analysis of the BP algorithms, we obtain functions that approximately describe the large-system BER performance for multiuser detection and iterative decoding. A power control problem is formulated using these functions, aiming to minimize the total power transmitted over a given discrete set of power levels, subject to maintaining an acceptable BER performance for each user. This problem is hard, so a suboptimal algorithm of polynomial computational complexity is proposed.

Keywords: Code-division multiple access, belief propagation, multiuser
detection, iterative decoding, power control, large system analysis.


March 18, 2005,
10:15 am

Room: HC2, Hörsalsvägen 14, Chalmers tekniska högskola, Göteborg

Public defence of PhD thesis by Wojciech Mostowski, Computing Science, Chalmers University of Technology:

Formal Development of Safe and Secure Java Card Applets

Faculty opponent: Professor Arnd Poetzsch-Heffter, University of Kaiserslautern


This thesis is concerned with formal development of Java Card applets. Java Card is a technology that provides a means to program smart cards with (a subset of) the Java language. In recent years Java Card technology gained great interest in the formal verification community. There are two reasons for this. Due to the sensitive nature (e.g., security, maintenance costs) of Java Card applets, formal verification for Java Card is highly desired. Moreover, because of the relative simplicity of the programming language, Java Card is also a feasible target for formal verification. The formal verification platform that we used in our research is the KeY system developed in the KeY Project. One of the main objectives of our research is to find out how far formal verification for industrial size Java Card applets goes, in terms of usability, automation, and power (expressivity of constraints). Furthermore, we investigated practical and theoretical shortcomings of the verification techniques and development methods for Java Card applets. As a result, we adapted a program logic for Java Card to be able to express interesting, meaningful safety and security properties (strong invariants) and proposed design guidelines to support and ease formal verification (design for verification). We performed extensive practical experiments with the KeY system to justify and evaluate our work.

Formal aspects of our research concentrate on source code level verification of Java Card programs with interactive and automated theorem proving. Our work has been driven by certain assumptions, motivated by the KeY Project's philosophy: (1) formal verification should be accessible to software engineers without years of training in formal methods, (2) we should be able to perform full verification whenever needed, i.e., we want to handle complex Java Card applets that involve Java Card specific features, like atomic transactions and object persistency, (3) the verified code should not be subjected to translations, simplifications, intermediate representations, etc., and finally, (4) the properties that we prove should relate to important safety and security issues in Java Card development. We relate to these goals in our work.

Keywords: Java Card, object-oriented design, formal specification, formal verification, Dynamic Logic


March 7, 2005,
10:00 am

Room: EC, Hörsalsvägen 11, Chalmers tekniska högskola, Göteborg.

Public defence of PhD thesis by Boris Koldehofe, Computing Science, Chalmers University of Technology:

Distributed Algorithms and Educational Simulation/Visualisation in Collaborative Environments

Faculty opponent: Professor Luís Rodrigues, Department of Informatics, Faculty of Sciences, University of Lisbon


This thesis examines a general class of applications called collaborative environments, which allow multiple users to share and modify information in real time despite not being present at the same physical location. We take two views on collaborative environments: we examine algorithms that support the implementation of collaborative environments, and we use collaboration to support educational visualisation/simulation environments in the context of distributed computing.

Important algorithmic aspects of collaborative environments are to provide scalable communication that allows interest management of processes, and to deal with modification of information among collaborators in an efficient and consistent manner. However, to support real-time interactions one may need to trade strong reliability guarantees for efficiency, using algorithms that scale to a large number of processes and offer reliability expressed with probabilistic guarantees.

Besides proposing and evaluating peer-to-peer algorithms for large-scale event dissemination, this work considers the impact of buffer management and membership on achieving stable performance even under critical system parameters. Moreover, we examine consistency management as well as interest management in combination with lightweight peer-to-peer dissemination schemes. Lightweight cluster consistency management allows a dynamic set of processes to perform updates on a set of processes, observed in optimistic causal order. Topic-aware membership management supports lightweight dissemination schemes in propagating only events that correspond to each process's interest, while at the same time providing a fair work distribution.

The second part deals with collaboration in the context of educational simulation/visualisation. We present and evaluate LYDIAN, a visualisation/simulation environment that helps in studying distributed algorithms and protocols. Besides looking at what environments such as LYDIAN should provide in order to be used successfully in class, we consider the use of collaboration to make simulation environments more interactive and to enable new visualisations of learning concepts to evolve.


February 8, 2005,
10:15 am

Room: HC2, Hörsalsvägen 14, Chalmers tekniska högskola, Göteborg.

Public defence of PhD thesis by Niklas Eén, Computing Science, Chalmers University of Technology:

SAT Based Model Checking

Faculty opponent: Professor Wolfgang Kunz, Technische Universität Kaiserslautern, Germany


This thesis is a study of automatic reasoning about finite state machines (FSMs). Two techniques used in hardware verification are presented. In both, the verification is carried out by a translation of the problem into propositional logic. Satisfiability and validity of propositional formulas are decided by the use of a SAT solver. For this reason, the fundamental techniques of a modern SAT solver are also presented.

The material belongs in the research field of symbolic model checking (SMC). The field comprises different methods of verifying (temporal) properties of finite systems, such as hardware designs, with a high degree of automation. The scope of current methods, and the level of automation, is such that SMC is frequently applied in industry.

One way to prove a property of a system is to explicitly enumerate all reachable states, and check the property for each one. This is known as explicit state model checking. SMC, on the other hand, works by reasoning symbolically about the system, using a compact representation of sets of states. There is no direct relation between the size of a set and its representation, which gives SMC the potential of handling systems with very large state spaces.
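The explicit-state approach described above can be shown in miniature: enumerate every state reachable from the initial state by breadth-first search and test the property in each one. The 3-bit counter below is an invented example; real model checkers avoid exactly this enumeration, which is what makes the symbolic representation valuable.

```python
from collections import deque

def check_property(init, transition, prop):
    """Return True iff prop holds in every state reachable from init."""
    seen, queue = {init}, deque([init])
    while queue:
        s = queue.popleft()
        if not prop(s):
            return False          # a reachable counterexample state
        for t in transition(s):
            if t not in seen:
                seen.add(t)
                queue.append(t)
    return True

step = lambda s: [(s + 2) % 8]    # counter increments by two, modulo 8
print(check_property(0, step, lambda s: s % 2 == 0))  # evenness is invariant
```

The `seen` set here is the explicit representation whose size tracks the state space directly; a symbolic method replaces it with a formula (a BDD, or a SAT encoding) whose size need not.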

Conventional SMC is carried out by using binary decision diagrams (BDDs), a canonical representation of boolean functions, to compute and represent subsets of the state space. Although algorithms based on BDDs have been successful in many applications, there are limitations that cannot easily be overcome. In this thesis, alternative approaches based on SAT are explored, in the hope of removing some of those limitations.

The first paper in the thesis shows how reachability analysis (computing a representation of the reachable states) can be performed in much the same way as for BDDs, using a non-canonical representation of boolean functions. The method includes a translation from quantified boolean formulas (QBFs) to propositional formulas, and the use of a SAT solver for termination checks.

The second paper shows how safety properties can be proven by means of temporal induction (also known as k-induction). Several improvements are made to previous techniques, in particular by the introduction and novel use of an incremental SAT solver. The performance gain is documented by thorough testing.
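Temporal induction can be sketched over an explicit finite transition relation (real tools encode the two checks below as SAT queries rather than enumerating paths; the counter system is an invented example). The property is k-inductive if it holds on every initial path of length up to k, and any k+1 consecutive property-satisfying states force the property in the next state.

```python
from itertools import product

def k_induction(states, init, step, prop, k):
    """Check whether prop is provable by k-induction for this system."""
    # base case: prop holds within k steps of the initial states
    frontier = set(init)
    for _ in range(k + 1):
        if any(not prop(s) for s in frontier):
            return False
        frontier = {t for s in frontier for t in step(s)}
    # induction step: every path of k+1 prop-states extends safely
    for path in product(states, repeat=k + 1):
        if all(prop(s) for s in path) and \
           all(b in step(a) for a, b in zip(path, path[1:])):
            if any(not prop(t) for t in step(path[-1])):
                return False
    return True

states = range(8)
step = lambda s: [(s + 2) % 8]
print(k_induction(states, init=[0], step=step, prop=lambda s: s % 2 == 0, k=1))
```

Note that a property can hold in all reachable states yet fail the induction step at a given k (e.g. `s != 7` for this system at k=1), which is why increasing k, and the incremental SAT solving the paper introduces, matters in practice.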

The third paper documents in detail how a modern SAT solver is constructed and suggests some extensions. It shows how arbitrary boolean constraints can be added to a SAT solver, and also implements an incremental SAT interface.

The fourth and final paper proposes a solution to the important problem of generating good SAT encodings of domain specific problems. The approach is general in the sense that it is not limited to the typical translation from netlists, often used in hardware verification.



January 31, 2005, 1:15 pm

Room: EB, EDIT-building, Rännvägen 6B
Chalmers tekniska högskola, Göteborg

Public defence of PhD thesis by Stefan Axelsson, Computing Science, Chalmers University of Technology:

Understanding Intrusion Detection Through Visualisation

Faculty opponent: Professor John McHugh, CERT Coordination center, Software Engineering Institute, Carnegie Mellon University, Pittsburgh, USA.


With the ever-increasing use of computers for critical systems, computer security, the protection of data and computer systems from intentional, malicious intervention, is attracting much attention. Among the methods of defence, intrusion detection, i.e. the application of a tool that helps the operator identify ongoing or already perpetrated attacks, has been the subject of considerable research over the past ten years. A key problem with current intrusion detection systems is the high number of false alarms they produce. This thesis presents research into why false alarms are, and will remain, a problem, and proposes applying results from the field of information visualisation to intrusion detection. The aim is to enable the operator to correctly identify false (and true) alarms, and to aid the operator in identifying other operational characteristics of intrusion detection systems. Four visualisation approaches were tried, mainly on data from web server access logs. Two were direct approaches, where the system puts the onus of identifying the malicious access requests on the operator by way of the visualisation. Two were indirect approaches, where the states of two self-learning automated intrusion detection systems were visualised so that the operator could examine their inner workings, in the hope of understanding how the systems operated and whether that level of operation, and the quality of the output, was satisfactory. Several experiments were performed, and many different attacks were found in web access data from publicly available web servers. The visualisation helped the operator detect the attacks herself and, more importantly, the false alarms. It also helped her determine whether other aspects of the operation of the self-learning intrusion detection systems were satisfactory.

Keywords: Computer security, intrusion detection, visualisation, usability, human computer interaction.



December 21, 2005,
10:15 am

Room EC, ED&IT building,
Rännvägen 6B, Chalmers University of Technology

Licentiate seminar by Hans Svensson, Computing Science, Chalmers University of Technology:

Verification of Erlang Programs using Testing and Tracing

Discussion leader: Lars-Åke Fredlund, LSIIS, Facultad de Informática, Universidad Politécnica de Madrid.


Producing reliable computer programs is a difficult and expensive task, and the constant demand for more and more complex systems does not make the task easier. In this thesis we present some program verification tools and techniques which can improve the situation.

We introduce a testing technique, where the idea is to first run the system with some test data, thereby creating traces of events and states. Then, using an abstraction function specified by the user, an abstract state transition diagram of the system is automatically generated from the traces. We have created a special language for expressing the abstraction functions. The abstract state transition diagram can be nicely visualized and thus greatly helps in debugging the system. Lastly, formal requirements of the system specified in temporal logic can be checked automatically for the abstract state transition diagram.
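As a toy illustration of this pipeline (written in Python rather than Erlang, and with invented names and an invented abstraction function), the traces can be folded into an abstract state transition relation roughly as follows:

```python
# Sketch: run the system to collect traces, apply a user-supplied
# abstraction function, and build an abstract transition diagram.
from collections import defaultdict

def abstract_transitions(traces, abstraction):
    """Map each concrete trace through `abstraction` and collect the
    resulting abstract transition relation."""
    transitions = defaultdict(set)
    for trace in traces:
        abstract_trace = [abstraction(state) for state in trace]
        for src, dst in zip(abstract_trace, abstract_trace[1:]):
            transitions[src].add(dst)
    return dict(transitions)

# Example: abstract a counter's concrete values to their sign.
traces = [[0, 1, 2, 1, 0, -1], [0, -1, -2, -1, 0]]
sign = lambda n: "zero" if n == 0 else ("pos" if n > 0 else "neg")
diagram = abstract_transitions(traces, sign)

# A simple safety check over the abstract diagram: the counter
# never jumps directly from "pos" to "neg".
assert "neg" not in diagram.get("pos", set())
```

On the abstract diagram, requirements in temporal logic can then be checked mechanically, as the paragraph above describes.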

We present a new implementation of a leader election algorithm used in the generic leader behavior known as gen_leader. The new implementation is based on a formally verified algorithm, which has been adapted to fulfill the existing requirements. The testing technique with traces and abstractions has been used to check the implementation we propose here. We also used the new testing tool QuickCheck to further increase our confidence in the implementation of this very tricky algorithm. The new implementation passed all tests successfully.

We propose an extension to an existing formal single-node semantics for Erlang. The extension models the concept of nodes. The motivation is that there are sequences of events that can occur in practice, but are impossible to describe using a single-node semantics. The consequence is that some errors might not be detected by model checkers based on the existing semantics, or by other single-node verification techniques such as testing. Our extension is modest; it re-uses most of the existing semantics, but adds a new top-layer.

We present a case study in which we test the implementation and adaptation of a formally verified algorithm. Algorithms described in the literature can often be used to solve practical, industrial problems. Nevertheless, we observe little transfer of algorithms from research papers into products. Most of the time, algorithm modifications are necessary to meet all requirements. We propose adaptation and testing of formal properties as the cheapest way to check the correctness of the modifications, since performing a formal proof seems unrealistic for industrial systems. We show how the properties stated in the articles guide our tests.


August 29, 2005,
1:15 pm

Room: ES52, Rännvägen 6, Chalmers University of Technology, Göteborg

Licentiate seminar by Daniel Andersson, Computer Engineering, Chalmers University of Technology:

The Impact of the Skin Effect in Deep Submicron Integrated Circuits

Discussion leader: Professor Atila Alvandpour, Institutionen för Systemteknik, Linköpings universitet


Since the first chip was manufactured in a CMOS technology, there has been a drive to shrink the dimensions of both wires and transistors, because down-scaling enables designs with more functionality and higher performance. However, with the 0.25 um process, interconnect delay became as important as gate delay, and today interconnects are the limiting factor for chip performance. More attention therefore has to be paid to the behavior of interconnects.

Until recent years, RC models have been used to capture interconnect behavior. However, in today's high-performance circuit designs, RC models are no longer accurate, since inductance (L) is becoming significant. In some cases not even RLC models are accurate enough to capture interconnect behavior, because of secondary effects that affect interconnects. One of these secondary effects is the skin effect, a phenomenon in which high-frequency currents tend to crowd at the surface of the interconnect due to inductive effects. This results in a frequency-dependent resistance that increases with frequency.
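The standard textbook relation behind this phenomenon (a general formula, not a result from the thesis) is the skin depth, the depth at which the current density has fallen to 1/e of its surface value. A small sketch, assuming a copper conductor and free-space permeability:

```python
# Skin depth delta = sqrt(rho / (pi * f * mu)): it shrinks as
# 1/sqrt(f), so the effective conducting cross-section, and hence
# the wire resistance, becomes frequency dependent.
import math

RHO_CU = 1.68e-8          # resistivity of copper [ohm*m]
MU0 = 4 * math.pi * 1e-7  # permeability of free space [H/m]

def skin_depth(freq_hz, rho=RHO_CU, mu=MU0):
    """Depth at which current density falls to 1/e of its surface value."""
    return math.sqrt(rho / (math.pi * freq_hz * mu))

for f in (1e8, 1e9, 1e10):
    print(f"{f:.0e} Hz: skin depth = {skin_depth(f)*1e6:.2f} um")
```

At gigahertz frequencies the skin depth of copper is on the order of a micrometre, i.e. comparable to deep-submicron wire dimensions, which is why the effect matters here.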

In this thesis, the impact of the skin effect on interconnect delay is investigated. For a scenario in which interconnect length and width are varied along with the driver strength, the skin effect is shown to add substantial excess delay to the interconnect delay. This excess delay is also shown to have a large impact on buffer design, and a repeater insertion method that takes the skin effect into account is therefore presented.


June 20, 2005,
10:15 am

Room: EB, EDIT-building, Rännvägen 6B, Chalmers University of Technology

Licentiate seminar by Jan-Willem Roorda, Computing Science, Chalmers University of Technology:

Symbolic Trajectory Evaluation using a Satisfiability Solver

Discussion leader: Joel Ouaknine, Oxford University Computing Laboratory, Oxford, UK


SAT-based symbolic trajectory evaluation (STE) is a useful complement to bounded model checking (BMC) and binary decision diagram (BDD) based STE in industrial settings. This thesis improves upon SAT-based STE.

The three main contributions of this thesis are:

1. We give a novel semantics for STE that is more faithful to the proving power of existing STE algorithms than earlier semantics for STE. In this forward closure semantics, we use forward closure functions as circuit models. The idea is that a forward closure function takes as input a three-valued state of the circuit and calculates all information about the circuit state at the same point in time that can be derived by propagating the information in the input state in a forward fashion.

2. Using the closure semantics as a basis, we developed a novel constraint-based algorithm for STE. The idea is that a set of constraints is generated whose solutions are exactly the trajectories that satisfy the antecedent but not the consequent. So, each solution to the generated constraints is a counterexample to the STE assertion. These constraints can be translated directly into SAT clauses, yielding a SAT-based STE algorithm. Our constraint-based algorithm outperforms existing simulation-based algorithms for SAT-based STE on a series of benchmarks.

3. The dual-rail encoding is traditionally used in STE algorithms to represent three-valued expressions by two Boolean expressions. We show how a SAT-solver can be adapted into a three-valued SAT-solver that deals with STE's three-valued logic directly, without using a dual-rail encoding. The constraints for an STE assertion can then be mapped directly to three-valued SAT problems. This leads to smaller SAT problems, which in some cases are easier to solve.
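As a minimal sketch of one common dual-rail convention (the helper names are ours, and this is the textbook encoding rather than necessarily the one used in the thesis): each three-valued signal is represented by two Booleans, one asserting "is 1" and one asserting "is 0", with the unknown value X asserting neither.

```python
# Dual-rail encoding of STE's three-valued logic:
#   1 -> (1, 0),  0 -> (0, 1),  X (unknown) -> (0, 0)
ENC = {"1": (1, 0), "0": (0, 1), "X": (0, 0)}
DEC = {v: k for k, v in ENC.items()}

def and3(a, b):
    """Three-valued AND on dual-rail pairs: the result is 1 only if
    both inputs are 1, and 0 as soon as either input is 0."""
    (a1, a0), (b1, b0) = ENC[a], ENC[b]
    return DEC[(a1 & b1, a0 | b0)]

assert and3("X", "0") == "0"   # a 0 input forces the output low
assert and3("X", "1") == "X"   # unknown propagates otherwise
```

The thesis's third contribution removes this layer: instead of compiling each three-valued variable into two Boolean SAT variables, the solver itself manipulates the three values directly.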


June 10, 2005,
10:15 am

Room: EA, EDIT-building, Rännvägen 6B, Chalmers University of Technology

Licentiate seminar by Nils Anders Danielsson, Computing Science, Chalmers University of Technology:

Precise Reasoning About Non-strict Functional Programs
- How to Chase Bottoms, and How to Ignore Them

Discussion leader: Dr Ross Paterson, Department of Computing, School of Informatics, City University, London, UK


This thesis consists of two parts. Both concern reasoning about non-strict functional programming languages with partial and infinite values and lifted types, including lifted function spaces.

The first part is a case study in program verification: We have written a simple parser and a corresponding pretty-printer in Haskell. A natural aim is to prove that the programs are, in some sense, each other's inverses. The presence of partial and infinite values makes this exercise interesting. We have tackled the problem in different ways, and report on the merits of those approaches. More specifically, first a method for testing properties of programs in the presence of partial and infinite values is described. By testing before proving we avoid wasting time trying to prove statements that are not valid. Then it is proved that the programs we have written are in fact (more or less) inverses, using first fixpoint induction and then the approximation lemma.

Using the proof methods described in the first part can be cumbersome. As an alternative, the second part justifies reasoning about non-total (partial) functional languages using methods seemingly only valid for total ones. Two languages are defined, one total and one partial, with identical syntax. A partial equivalence relation is then defined, the domain of which is the total subset of the partial language. It is proved that if two closed terms have the same semantics in the total language, then they have related semantics in the partial language.

Keywords: Functional programming, equational reasoning, non-strict, partial, infinite, lifted, inductive, coinductive


June 7, 2005,
1:15 pm

Room: Wigforssalen, Halmstad University

Licentiate seminar by Urban Bilstrup, Computer Engineering, Chalmers University of Technology:

Design Space Exploration of Wireless Multihop Networks

Discussion leader: Tekn. Dr Jan Nilsson, Institutionen för informationsöverföring, Totalförsvarets forskningsinstitut


This thesis explores the feasible design space of wireless multihop networks and identifies fundamental design parameters. In this exploration it is important to ignore details and instead take a holistic view: all protocol details and all details of radio wave propagation models are disregarded, and the system is modelled strictly at an architectural level. From an information-theoretic perspective, there is a limit to the capacity that a certain bandwidth and a certain signal-to-noise ratio at the receiver can provide. This limit is approximated as a volume in the time-frequency-space domain, and a single transmission is represented as an occupied volume in this domain. A wireless multihop network covers a spatial area, and the question is how many transmission volumes can be fitted into a given limited spatial area. This volume fitting should be done so as to maximize overall performance, or to trade available resources to favour a specific characteristic of the wireless multihop network. The volume model is used for the design space exploration of a wireless multihop network. It is argued that the fault tolerance and the energy gain achieved in a multihop topology are its strengths compared to a single-hop architecture. It is further shown that the energy gain comes at the expense of delay and a greater end-to-end error probability, which indicates that these parameters must be balanced very carefully in order to gain in the overall performance perspective. It can further be concluded that the overall spatial capacity is increased as a result of the spatial channel reuse in a multihop topology. On the other hand, it is also shown that the multihop topology introduces a rather stringent geometrical capacity limitation when the number of nodes in a wireless multihop network is increased. The dynamics (e.g. node mobility, changing radio channels, etc.) of a large-scale wireless multihop network are also a limiting factor: the nodes' mobility creates a knowledge horizon beyond which very little can be known about the present network topology.

Keywords: Wireless multihop networks, wireless sensor networks, ad hoc networks, design space, wireless embedded networks, packet radio networks


May 27, 2005,
9:30 am

Room: EL 41, Rännvägen 6B, Chalmers, Göteborg

Licentiate seminar by Magnus Almgren, Computer Engineering, Chalmers University of Technology:

Intrusion Detection and Protection of Application Servers

Discussion leader: Dr Steven Furnell, University of Plymouth, Plymouth, U.K.


The protection of application servers using intrusion detection and other related techniques is studied in this thesis. A thorough review is first made of taxonomies for intrusion detection systems (IDSs) and of how these can help in understanding the basic functionality and problems of intrusion detection. A lightweight IDS with a number of interesting features has been developed and tested in real-life situations. I have also studied the consequences of integrating such a tool into an application server, rather than keeping it separate from the monitored application as is common in traditional host-based or network-based systems. Integration enables several advantages, such as the ability to monitor encrypted transactions, an Achilles' heel of traditional systems. I also studied a number of extensions and further developments of intrusion detection. I have developed an intrusion-tolerant architecture that not only detects intrusions but also provides a means to tolerate them with a graceful degradation of the offered service. The intrusion tolerance is achieved by leveraging methods from the fault-tolerance community. Finally, I suggest a method for facilitating the set-up and training of IDSs based on active learning algorithms. Considerable performance improvements can be achieved in this way, as shown in the experiments done in this work.

Keywords: Computer security, intrusion detection, application-based intrusion detection, intrusion tolerance, active learning


April 22, 2005,
10:15 am

Room: Theatre Wigfors at Visionen, Halmstad University, Halmstad

Licentiate seminar by Sacki Agelis, Computer Engineering, Chalmers University of Technology:

Reconfigurable Optical Interconnection Networks for High-Performance Embedded Systems

Discussion leader: Dr. Håkan Forsberg, SaabTech, Sweden


In embedded computer and communication systems, the capacity demanded of interconnection networks is continuously increasing in order to achieve high-performance systems. Recent breakthroughs show that substantial performance gains can be achieved by using reconfigurability inside a single chip. In this thesis, however, the focus is on system-level reconfigurability (between chips or modules) and the performance gains that can potentially be achieved by supporting runtime reconfigurability at the system level.

This thesis addresses the field of runtime system-level reconfigurability with the use of optics in switches and routers for data and telecommunications, and in multi-processor systems used for embedded signal processing. Several reconfigurable systems for switching and routing, with support for adapting to asymmetric traffic patterns, are proposed and compared to identify how design choices affect flexibility, performance, etc. The proposed solutions are characterized by their multistage optical interconnection networks with reconfigurable shuffle patterns, where the reconfigurability is provided by micro-opto-electro-mechanical systems. More specifically, application-specific bottlenecks can be resolved by reconfiguring the interconnection network according to the current application demands. The benefits of the architectural solutions are confirmed by simulations that clearly show that the architectures can achieve high performance both for symmetric application characteristics and for several classes of asymmetric application characteristics. The final architectural solution is characterized by electronic packet switches interconnected through a reconfigurable optical backplane. Moreover, the thesis presents how several signal processing applications can be mapped to run concurrently, in a time-shared scheme, on a single reconfigurable multi-processor system that has high flexibility to adapt to the application currently at hand. The interconnection network is then adapted (reconfigured) according to the demands of the currently executing application at each time instance. The analysis shows that it is feasible to build such a system with today's components.

Keywords: MOEMS, micro-opto-electro-mechanical systems, reconfigurable interconnection networks, data communication, telecommunication, radar signal processing, asymmetric application, symmetric application, STAP, SAR, embedded systems, VCSEL, parallel processing system, optical communication.


March 23, 2005,
10:15 am

Room: EL41, EDIT-building, Rännvägen 6B, Chalmers University of Technology, Göteborg


Licentiate seminar by Anders Nilsson, Computer Engineering, Chalmers University of Technology:

New Code Design and Analysis Tools for Serially Concatenated Systems

Discussion leader: Dr. Ingmar Land, Department of Communication Technology, Institute of Electronic Systems, Aalborg University


In-line bit interleaving has been used in a number of papers over the last decade and has proven to be very effective. Although it has several advantages over conventional bit interleaving, conventional bit interleaving is still far more common. One reason for this is that the advantages of in-line bit interleaving are not yet widely recognized. Another is that the most commonly used code design principles cannot be used with in-line bit interleaving. Hence, in this thesis we propose new code design principles that can be used for systems with in-line bit interleaving. Design rules are proposed for both the inner and the outer code of a serially concatenated system. The system considered throughout this thesis is serially concatenated trellis coded modulation (SCTCM) with a throughput of 2 bits/symbol and 8PSK signaling over the additive white Gaussian noise (AWGN) channel. Several of the new ideas are, however, applicable to any bit-interleaved system that uses iterative decoding.

The new design rules for the inner code are based on the characteristics of bit-interleaved coded modulation with iterative decoding (BICM-ID). BICM-ID can be seen as an SCTCM system with a one-state inner code. We use this fact and extend the design principles of BICM-ID to be valid for inner codes with more than one state. Furthermore, we analyze the in-line bit interleaver and explain its advantages. In the analysis, the mutual information (MI) constellation is proposed. The MI constellation becomes a very useful tool in the design of the outer code. We propose new design rules that explain how to design the outer code with respect to the MI constellation.


February 10, 2005,
1:00 pm

Room: EL41, EDIT-building, Rännvägen 6B, Chalmers University of Technology, Göteborg

Licentiate seminar by Minh Quang Do, Computer Engineering, Chalmers University of Technology:

Towards a Power and Performance Simulation Framework for Parallel DSP Architectures

Discussion leader: Associate Professor Viktor Öwall, Digital ASIC Group, Lund University of Technology


High performance, low power and low cost will continue to be driving factors for digital signal processor (DSP) and embedded computer systems of the future. Recent improvements in semiconductor technology have led to faster circuits, higher density and smaller dimensions. However, along with technology scaling there are still many difficult challenges for future electronic and computer system design, one of them being power consumption. Much research effort is dedicated to reducing both dynamic and static power consumption at all design levels, from the circuit level to the architecture and algorithm levels. At all these levels, performance-power simulation and estimation tools play a very important, even decisive, role.

This thesis describes in detail our research work on designing and implementing an architecture-level, cycle-accurate power-performance simulator for parallel DSP architectures (DSP-PP). DSP-PP uses the proposed White-box Table-based Total Power Consumption (WTTPC) estimation approach, offering rapid and accurate architecture-level power estimation models for processor components with regular structures (such as SRAM arrays), based on tables of power values. This approach offers relatively simple, high-accuracy architecture-level power estimation models accounting for both dynamic and static power consumption.
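A toy sketch of table-based power estimation in general (the component names and energy values below are invented for illustration, not DSP-PP's characterised tables): each (component, operation) pair has a pre-characterised energy entry, the simulator sums the entries for the events it observes, and a static leakage term is added per cycle.

```python
# Hypothetical pre-characterised energy table, in picojoules.
ENERGY_PJ = {
    ("sram", "read"): 5.0,
    ("sram", "write"): 6.5,
    ("alu", "add"): 1.2,
    ("alu", "idle"): 0.1,
}
STATIC_PJ_PER_CYCLE = 0.8  # hypothetical leakage per clock cycle

def estimate_energy(event_trace, n_cycles):
    """Sum table entries for dynamic events, plus static leakage."""
    dynamic = sum(ENERGY_PJ[ev] for ev in event_trace)
    return dynamic + STATIC_PJ_PER_CYCLE * n_cycles

trace = [("sram", "read"), ("alu", "add"), ("sram", "write")]
print(round(estimate_energy(trace, n_cycles=3), 1))
```

The appeal of the approach, as the abstract notes, is that looking up pre-characterised values is both fast and accurate for components with regular structures.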

The DSP-PP simulator, implemented using C++ and the SystemC libraries, simulates an extended version of the "ManArray" parallel DSP architecture. DSP-PP produces performance data (i.e. the number of clock cycles) and power estimates while running executable programs. By varying the configuration of the architecture, the user can rapidly explore the power and performance design space.

Keywords: DSP architecture, Power estimation models, Power estimation simulation.


January 20, 2005,
10:00 am

Room: EC, EDIT-building, Rännvägen 6B, Chalmers University of Technology, Göteborg

Licentiate seminar by Anders Gidenstam, Computing Science, Chalmers University of Technology:

Synchronization and consistency in concurrent systems

Discussion leader: Shlomi Dolev, Ben Gurion University of the Negev, Israel.


This thesis investigates aspects of synchronization and coordination in concurrent systems. In such systems, synchronization and coordination are highly important, as they form the basis for how a set of entities can collaborate to solve a task. In systems without shared knowledge of global time, logical clocks provide a way to track how events are related to each other. Small logical clocks with high causal-ordering accuracy are useful, in particular where (i) precise knowledge of the causal dependencies among events implies savings in time overhead and (ii) the overhead of full vector clock timestamps is high. We introduce the Non-Uniformly Mapped R-Entries Vector (NUREV) clocks, a general class of plausible clocks that allow accuracy adaptation, and we analyse the ways in which these clocks may relate causally independent pairs of events. Our analysis resulted in a set of conclusions and in the formulation of new, adaptive plausible-clock algorithms with improved accuracy even when the number of clock entries is very small, which is important in peer-to-peer communication systems.
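A toy sketch of the R-entry idea behind such clocks (the mapping below is an invented uniform one; NUREV's point is precisely that the mapping onto the R entries need not be uniform): each of N processes increments one of R < N vector entries, trading causal-ordering accuracy for timestamp size.

```python
# N processes share R < N clock entries via a mapping.
def make_mapping(n_procs, r_entries):
    # a simple uniform example mapping (NUREV allows non-uniform ones)
    return {p: p % r_entries for p in range(n_procs)}

def tick(clock, proc, mapping):
    """Advance the entry that `proc` is mapped to."""
    clock = list(clock)
    clock[mapping[proc]] += 1
    return tuple(clock)

def leq(a, b):
    """Timestamp ordering: a happened-before-or-equal b."""
    return all(x <= y for x, y in zip(a, b))

mapping = make_mapping(n_procs=4, r_entries=2)
c0 = (0, 0)
e1 = tick(c0, proc=0, mapping=mapping)  # event on process 0
e2 = tick(c0, proc=2, mapping=mapping)  # independent event on process 2

# Processes 0 and 2 share entry 0, so their timestamps coincide and the
# clock cannot tell these two causally independent events apart -- the
# accuracy loss that NUREV clocks are designed to control.
print(leq(e1, e2), leq(e2, e1))
```

A plausible clock may, as here, fail to report concurrency, but it never inverts a genuine causal order; the thesis analyses exactly how often such misjudgements occur for different mappings.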

Concerning system services in concurrent systems, such as scheduling and resource allocation, the potential of efficient synchronization methods is large. In particular, in multiprocessor systems certain synchronization methods, such as lock-based ones, may limit parallelism. It is important to study the impact of wait-free/lock-free synchronization design on key services for multiprocessor systems, such as the memory allocation service. Efficient and scalable memory allocators for multi-threaded applications in multiprocessor systems are therefore a significant goal of recent research projects.

We propose a lock-free memory allocator to enhance the parallelism in such systems. Its architecture is inspired by Hoard, a successful concurrent memory allocator, with a modular and scalable design that preserves scalability and helps avoid false sharing and heap blowup. As part of our effort to design appropriate lock-free algorithms for synchronization in multiprocessor systems, we propose a new non-blocking data structure called flat-sets, supporting conventional "internal" operations as well as "inter-object" operations for moving elements between flat-sets. Our experiments indicate that the scalability properties are enhanced even further with the help of the lock-free synchronization, without compromising the other good properties provided by the Hoard architecture.


January 19, 2005,
1:15 pm

Room: ED, EDIT-building, Rännvägen 6B, Chalmers tekniska högskola, Göteborg

Licentiate seminar by Angela Wallenburg, Computing Science, Göteborg University:

Induction Rules for Proving Correctness of Imperative Programs

Discussion leader: Dilian Gurov, Kungliga Tekniska Högskolan, Stockholm


This thesis aims at simplifying the user interaction in semi-interactive theorem proving for imperative programs. More specifically, we describe the creation of customised induction rules that are tailor-made for the specific program to be verified and thus make the resulting proof simpler. The concern is user interaction rather than proof strength. To achieve this, two different verification techniques are used.

In the first approach, we develop an idea where a software testing technique, partition analysis, is used to compute a partition of the domain of the induction variable, based on the branch predicates in the program we wish to prove correct. From this partition we mechanically derive a partitioned induction rule, which inherits the divide-and-conquer style of partition analysis and is (hopefully) easier to use than the standard (Peano) induction rule.
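As a hypothetical illustration (not an example taken from the thesis), a loop that branches on the parity of its induction variable n might partition the domain into even and odd numbers, yielding a partitioned rule in place of standard Peano induction:

```latex
% Standard (Peano) induction:
\[
\frac{P(0) \qquad \forall n.\; P(n) \rightarrow P(n+1)}
     {\forall n.\; P(n)}
\]
% A partitioned rule matching the even/odd branch structure: two base
% cases and a step of size two, so each proof obligation follows one
% branch of the program only.
\[
\frac{P(0) \qquad P(1) \qquad \forall n \ge 2.\; P(n-2) \rightarrow P(n)}
     {\forall n.\; P(n)}
\]
```

Each premise of the partitioned rule then corresponds to a single branch of the program, which is what makes the resulting proof obligations simpler to discharge.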

The second part of the thesis continues with a more thorough development of the method. Here the connection to software testing is completely removed and the focus is on inductive theorem proving only. This time, we make use of failed proof attempts in a theorem prover to gain information about the problem structure and create the partition. Then, based on the partition we create an induction rule, in destructor style, that is customised to make the proving of the loop simpler.

With the customised induction rules, in comparison to standard induction or Noetherian induction, the required user interaction is moved to an earlier point in the proof, which also becomes more modularised. Moreover, by using destructor-style induction we circumvent the problem of creating inverses of functions. The soundness of the customised induction rules created by the method is shown. Furthermore, the machinery of the theorem prover (KeY) is used to make the method automatic. The induction rules are developed to prove the total correctness of loops in an object-oriented language, and we concentrate on integers.





Last modified: January 9, 2006