|PhD seminars 2002|
18 December 2002, 10:00
Public defence of PhD thesis by Robert Feldt, Computer Engineering, Chalmers University of Technology:
Biomimetic Software Engineering Techniques for Dependability
Hörsalsvägen 14, Göteborg
While we struggle to keep our software dependable despite its increasing complexity, even the smallest biological system in nature shows features of dependability. This thesis applies ideas from, and algorithms modeled after, biological systems to the research on and development of dependable software.
Based on a theory of software development that focuses on the internal models of the developer and how to support their refinement, we present a design for an interactive software development workbench in which a biomimetic system searches for test sequences. A prototype of the workbench has been implemented and evaluated in a case study, which showed that the system successfully finds tests that reveal faults in both the software and its specification. Just as biological systems in nature exploit a niche in the environment, the biomimetic search system exploits the behavior of the software being developed.
In another study we applied genetic programming to evolve programs for an embedded control system. Although the procedure did not show much potential for use in real fault-tolerant software, the program variants could be used to visualize the difficulty of the problem domain, explore the effects of design decisions and trade off requirements.
Taken together, the work in this thesis supports the claim that biomimetic algorithms can be used to explore requirements, design and test spaces in early software engineering phases and thus help in building dependable software.
Keywords: biomimetic algorithms, software testing, automated testing, software development workbench, evolutionary computation, genetic programming, dependability, software engineering, design exploration
13 December 2002, 10:00
Public defence of PhD thesis by Vilgot Claesson, Computer Engineering, Chalmers University of Technology:
Efficient and Reliable Communication in Distributed Embedded Systems
Hörsalsvägen 4, Göteborg
A desired attribute in embedded systems with safety and real-time requirements is the basic capability to coordinate and synchronize system time and events. This directly relates to the establishment of a predictable communication base, which in turn becomes the basis for predictable communication at the system level. Focusing on bus-based communication protocols, we present a novel synchronization approach targeting efficiency and low communication overhead as the main drivers for TDMA environments. Existing techniques require explicit transfer of node ID information for synchronization; in this novel approach, the synchronization process instead uses the implicit information in each node's unique message length as the node identifier. Furthermore, our initial startup synchronization approach is fault-tolerant and has a bounded startup time. We also present a re-synchronization strategy that incorporates recovering nodes into synchronization.
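To make the length-based identification concrete, the following Python sketch (an illustration with hypothetical node names and frame lengths, not the thesis protocol) shows how a listener can infer the sender of a frame, and hence its place in the TDMA schedule, from the observed frame length alone:

    # Hypothetical static assignment: each node always transmits a frame of
    # a unique length, so the length itself acts as the implicit node ID.
    LENGTHS = {3: "node_A", 5: "node_B", 8: "node_C"}

    def identify_sender(frame: bytes) -> str:
        """Infer the sending node from the implicit information carried by
        the frame length (no explicit ID field is needed)."""
        try:
            return LENGTHS[len(frame)]
        except KeyError:
            raise ValueError(f"no node is assigned frame length {len(frame)}")

    # A node observing a 5-byte transmission on the bus can conclude that
    # node_B is sending and align its local slot counter accordingly.
    print(identify_sender(b"\x00" * 5))  # -> node_B

The sketch shows only the implicit identification; the fault-tolerant startup agreement and bounded startup time are the harder problems the thesis addresses.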
The event-triggered and the time-triggered media access paradigms have spawned distinct followings, with much-debated pros and cons regarding their relative flexibility, bandwidth efficiency and predictability. The event-triggered approach is commonly perceived as providing high flexibility, while the time-triggered approach is expected to provide a higher degree of predictable communication access to the media. One part of this thesis objectively and quantitatively assesses the capabilities and limitations of each of these paradigms. More importantly, we quantify the spread of their differences and provide system design guidelines for the suggested best usage of each approach. The focus of this work is on the response times of the communication system and on its schedulability in collaboration with the tasks in the nodes. Focusing on efficiency, the second component of this thesis introduces modifications to the time-triggered approach to efficiently accommodate event-triggered communication using the time-triggered operations as a base.
Keywords: distributed embedded systems, synchronization, media access protocols, time-triggered, event-triggered
6 December 2002, 13:00
Public defence of PhD thesis by Pär Moqvist, Transmission Theory, Chalmers University of Technology:
Multiuser Serially Concatenated Continuous Phase Modulation
Hörsalsvägen 14, Göteborg
A multiuser communication system using serially concatenated and randomly interleaved continuous phase modulation (SCCPM) over the additive white Gaussian noise (AWGN) channel is investigated. The users, which may be asynchronous, are allowed to have individual energy levels as well as carrier frequencies and phases. This model incorporates multiple-access signaling similar to direct-sequence code division multiple-access (DS-CDMA), trellis-coded multiple-access (TCMA), and frequency division multiple-access (FDMA) with arbitrary spectral overlap, as well as non-intentional co-channel or adjacent channel interference of the same signaling type. First, the system is analyzed through analytical upper bounds on the average bit error probability for a given user under maximum-likelihood (ML) detection, where the average is over the ensemble of systems over all sets of interleavers. It is shown that in a properly designed system, the bit error probability vanishes for infinite interleaver sizes and a sufficiently large channel signal-to-noise ratio (SNR), regardless of the signal correlation between the users. Thus, even with equal modulation, energy levels, and carrier frequencies and phases, the users can be detected adequately provided they employ random interleaving. The second part of the analysis concerns iterative decoding of multiuser SCCPM. A convergence analysis based on EXIT charts is presented, along with decoding threshold estimates. It is observed that in systems with no frequency offset, the performance of ML detection does not always carry over to iterative decoding. On the other hand, for many other systems excellent performance can be obtained both in terms of power efficiency (bit error rate as a function of SNR) and spectral efficiency (bandwidth). In particular, systems are demonstrated with performance within 1 dB of the average-power limited Shannon capacity at 1 bit/s/Hz, and within 2.3 dB at 2 bits/s/Hz.
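For reference (a standard information-theoretic fact, not a result of the thesis), the quoted gaps can be read against the spectral-efficiency limit of the average-power limited AWGN channel: reliable communication at a spectral efficiency of \eta bits/s/Hz requires

    \[
      \frac{E_b}{N_0} \;>\; \frac{2^{\eta} - 1}{\eta},
    \]

which evaluates to 0 dB at \eta = 1 bit/s/Hz and to 1.5 (about 1.76 dB) at \eta = 2 bits/s/Hz, the two operating points quoted above.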
8 November 2002, 10.15
Public defence of PhD thesis by Ana Bove, Computing Science, Chalmers University of Technology:
General Recursion in Type Theory
Room: Hörsalen, Matematiskt centrum, Eklandagatan 86, Göteborg
This thesis deals with the use of constructive type theory as a programming language. In particular, it presents a method to translate general recursive functional programs into their type-theoretic equivalents.
A key notion in functional programming is recursion, which allows the object being defined to refer to itself. Associated with recursion is the problem of termination since, in general, there is no guarantee that a recursive function will always terminate. It is hence very important to have mechanisms to prove that a certain function terminates, or even to ensure termination by the form of the definition. Examples of the latter are structurally recursive definitions, where every recursive call is performed on a structurally smaller argument.
Standard functional programming languages impose no restrictions on recursive programs and thus, they allow the definition of any general recursive function. There exist, though, less conventional functional programming languages where only terminating recursive definitions are allowed. Constructive type theory can be seen as one of them. In addition, logic can also be represented in constructive type theory by identifying propositions with types and proofs with programs of the corresponding type. Therefore, a type can encode a complete specification, requiring also logical properties from an algorithm. As a consequence, algorithms are correct by construction or can be proved correct by using the expressive power of constructive type theory. This is clearly an advantage of constructive type theory over standard programming languages. A computational limitation of type theory is that, to keep the logic consistent and type-checking decidable, only structurally recursive definitions are allowed.
Many important algorithms are general recursive. Although many such algorithms can be proved to terminate, there is no syntactic condition that guarantees their termination and thus, general recursive algorithms have no direct translation into type theory.
In this thesis, a method to formalise general recursive algorithms in type theory that separates the computational and logical parts of the definitions is presented. As a consequence, the resulting type-theoretic algorithms are clear, compact and easy to understand. Given a general recursive algorithm, the method consists in defining an inductive special-purpose accessibility predicate that characterises the inputs on which the algorithm terminates. The type-theoretic version of the algorithm can then be defined by structural recursion on the proof that the input values satisfy this predicate. When formalising nested algorithms, the special-purpose accessibility predicate and the type-theoretic version of the algorithm must be defined simultaneously because they depend on each other. Since the method separates the computational part from the logical part of a definition, formalising partial functions also becomes possible.
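As a minimal illustration of the method, here is a sketch in Lean 4 (the thesis works in Martin-Löf type theory; quicksort is a stand-in example, and the predicate is placed in Type rather than Prop to keep the Lean elaboration simple). The inductive domain predicate mirrors the call structure of the algorithm, and the function is then defined by structural recursion on the proof:

    -- Special-purpose domain predicate for quicksort: a proof of QsDom xs
    -- records that every recursive call made on input xs terminates.
    inductive QsDom : List Nat → Type where
      | nil  : QsDom []
      | cons : (x : Nat) → (xs : List Nat) →
               QsDom (xs.filter (· < x)) →
               QsDom (xs.filter (· ≥ x)) →
               QsDom (x :: xs)

    -- The function recurses structurally on the proof, not on the list, so
    -- the type checker accepts it even though quicksort itself is not
    -- structurally recursive in its list argument.
    def qsort : (xs : List Nat) → QsDom xs → List Nat
      | _, .nil => []
      | _, .cons x xs hl hr =>
          qsort (xs.filter (· < x)) hl ++ x :: qsort (xs.filter (· ≥ x)) hr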
18 October 2002, 10.00
Public defence of PhD thesis by Martin Hiller, Computer Engineering, Chalmers University of Technology:
A Software Profiling Methodology for Design and Assessment of Dependable Software
Room: HC2, Hörsalsvägen 14, Göteborg
The advent of computerized consumer products, such as automobiles and mobile systems, has produced a large increase in the need for dependable (or robust) systems. As cost is a relevant issue for such systems, the cost of dependability has to be kept low. Furthermore, as the replication of software is virtually free compared to the replication of hardware, the trend is to implement more and more functions in software. This motivates the search for methodologies for cost-efficient design and assessment of dependable software.
An established approach for designing dependable software entails the addition of error detection mechanisms (EDM's) and error recovery mechanisms (ERM's). These mechanisms are effective, however, only if their composition is matched with placement in locations where they actually help. It is the development of a systematic methodology to profile software in order to compose and locate EDM's and ERM's that this thesis endeavors to achieve.
Presented in this thesis is a set of approaches for profiling software such that the most vulnerable and/or critical modules and signals can be identified in a quantifiable way. The profiling methodology relies on the analysis of error propagation and error effects in modular software. The resulting profiles indicate where in a given software system errors tend to propagate and where they tend to cause the most damage as experienced by the environment.
The main contribution of this thesis is a software profiling methodology that encompasses development of the fault injection tool suite PROPANE (Propagation Analysis Environment) and the analysis framework EPIC (Exposure, Permeability, Impact, Criticality---the four main metrics introduced in the framework). The vision is that this contribution can aid software developers in the design and assessment of dependable software in the early stages of development.
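As a toy illustration of the profiling idea (a hypothetical Python sketch; PROPANE and the exact EPIC metrics are considerably more elaborate), a permeability-style figure for a module can be estimated by injecting bit-flip errors on its inputs and measuring how often they propagate to its output:

    import random

    def permeability(module, inputs, trials=1000):
        """Estimate the fraction of injected input errors that propagate
        to the module's output (illustrative, not the EPIC definition)."""
        propagated = 0
        for _ in range(trials):
            x = random.choice(inputs)
            golden = module(x)
            corrupted = x ^ (1 << random.randrange(16))  # flip one input bit
            if module(corrupted) != golden:
                propagated += 1
        return propagated / trials

    # Example: a module masking its input to 8 bits lets roughly half of
    # the injected 16-bit flips through to its output (~0.5).
    print(permeability(lambda x: x & 0xFF, inputs=range(256)))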
14 June 2002, 14.15
Public defence of PhD thesis by Thomas Lundqvist, Computer Engineering, Chalmers University of Technology:
A WCET Analysis Method for Pipelined Microprocessors with Cache Memories
Hörsalsvägen 14, Göteborg
14 February 2002, 13.15
Public defence of PhD thesis by Maarja Kruusmaa, Computer Engineering, Chalmers University of Technology:
Repeated Path Planning for Mobile Robots in Dynamic Environments
Room: Wigforssalen, Halmstad University
The research is motivated by the fact that most real-world applications of mobile robotics imply repeated traversal between predefined target points in large, uncertain real-world domains. The goal of this study is to develop a framework for navigation in large-scale dynamic environments that permits the robot to fulfill its assignment by adapting to the use of easily traversable paths.
While mobile robots usually tackle the problem of path planning in uncertain environments by implementing obstacle avoidance routines, in this approach the robot also learns to use routes along which obstacles occur less often. The few alternative approaches that consider the problem of global obstacle avoidance operate in well-structured environments using an a priori known topological map. The approach reported in this thesis uses a grid-based map for path planning. It is argued that a grid-based map together with a stochastic wavefront planner offers a greater number of alternatives to the path planning problem, and that the robot adapts to use the best routes even when very little a priori knowledge of the environment is available.
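To make the grid-based planning concrete, here is a minimal deterministic wavefront planner in Python (an illustration only; the thesis uses a stochastic wavefront variant, which this toy omits):

    from collections import deque

    def wavefront(grid, goal):
        """Propagate a wave of increasing cost from the goal over free cells.
        grid: 2D list with 0 = free and 1 = obstacle; goal: (row, col)."""
        rows, cols = len(grid), len(grid[0])
        cost = [[None] * cols for _ in range(rows)]
        cost[goal[0]][goal[1]] = 0
        queue = deque([goal])
        while queue:
            r, c = queue.popleft()
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if (0 <= nr < rows and 0 <= nc < cols
                        and grid[nr][nc] == 0 and cost[nr][nc] is None):
                    cost[nr][nc] = cost[r][c] + 1
                    queue.append((nr, nc))
        return cost

A path is then extracted from any start cell by repeatedly stepping to a neighbour with strictly lower cost until the goal is reached.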
To adapt to dynamic environments the robot uses case-based reasoning to remember the paths followed and to reason about their traversability. While case-based reasoning is usually applied for planning problems in static environments, this approach also works in dynamic environments and uses a unique similarity measure to reduce the solution space.
This thesis also offers an alternative decision-making strategy for mobile robots in hazardous environments that is based on the concept of irreversible decisions.
The experiments made in simulated and real environments demonstrate that this approach to global navigation permits the robot to increase the predictability of its behavior and to minimize the risk of collision and time delays.
Keywords: mobile robot, navigation, case-based reasoning, path planning, dynamic environments
|Licentiate seminars 2002|
Radar Signal Processing (RSP) is a typical application area. Such a system contains many computation nodes that are interconnected in order to co-operate and thereby achieve higher performance. The computing performance of the whole system is greatly affected by the choice of network. Computing nodes in a parallel computer for RSP should be tightly coupled, i.e., communications cost (e.g. latency) between nodes should be small, so that the whole system can be perceived as a single unit. This is possible if a suitable network with an efficient protocol is used.
There is an industrial need for new high-performance networks with support for the often heterogeneous real-time requirements found in embedded applications such as RSP and in other areas such as multimedia. The traffic in this kind of network can be classified according to its requirements. The proposed protocols partition the traffic into three classes with distinctly different qualities: traffic with hard real-time demands, such as mission-critical commands; traffic with soft real-time demands, such as process data (a deadline miss here only leads to decreased performance); and, finally, traffic with no real-time constraints at all. The contributions of the present thesis are protocols that integrate heterogeneous real-time services for the three traffic classes.
The performance of the proposed protocols is evaluated through simulations and analysis, and it is shown that they are an efficient choice for RSP systems. A brief survey of related technologies is included in the thesis; these are studied from the perspectives of application, architecture and user service.
Keywords: Optical, Ring, Pipeline, Distributed, Parallel-Processing, Real-time, SAN (System Area Network), Heterogeneous, Service
The proliferation of the Internet has rapidly increased the number of computers connected to networks. Not only the number of networked computers but also the contents and types of network traffic change constantly. All this calls for larger network bandwidths and better management of the available infrastructure. Increased network traffic leads to congestion when the available bandwidths are not large enough to cater for the requirements. Today, congestion in computer networks is a global problem. Countries and organisations which cannot afford the amount of bandwidth required to satisfy the network demands of their users are severely hit by congestion.
In order to get the best utilization out of any resource we need to first understand how it is used. Computer networks are no exception. We need to understand the behaviour of network traffic in order to plan for and implement the necessary infrastructure and to identify the reasons for any problem the users may encounter. Measurement is a basic way of understanding the nature of network traffic and evaluating how network resources are utilised.
15 November 2002, 10.15
Licentiate seminar by Karol Ostrovsky, Computing Science, Chalmers University of Technology:
Higher Order Broadcasting Systems
Room: MD10, Matematiskt centrum, Eklandagatan 86, Göteborg
Broadcast is a pervasive style of computer communication. In this style, the medium is a single nameless channel. Previous work on modelling such systems proposed CBS. In this dissertation, we propose a fundamentally different calculus called HOBS. Compared to CBS, HOBS 1) is higher order rather than first order, 2) supports dynamic subsystem encapsulation rather than static, and 3) does not require an "underlying language" to be Turing-complete. Moving to a higher order calculus is key to increasing the expressivity of the primitive calculus and alleviating the need for an underlying language. The move, however, raises the need for significantly more machinery to establish the basic properties of the new calculus. This dissertation develops the basic theory for HOBS and presents two example programs that illustrate programming in this language. The key technical underpinning is an adaptation of Howe's method to HOBS to prove that bisimulation is a congruence. From this result, HOBS is shown to embed the lazy lambda-calculus, and to partially encode the pi-calculus and its broadcasting version, the b-pi-calculus.
November 8, 2002, 15.15
Licentiate seminar by Peter Rundberg, Computer Engineering, Chalmers University of Technology:
Data Dependence Speculation Methods to Expose Thread-Level Parallelism
Room: Sal EL43, Linsen, Hörsalsvägen 11, Göteborg
The first method involves techniques that aid automatic tools in extracting parallelism from sequential programs. These techniques allow a compiler to identify parallel sections without proving that they are data independent. To ensure correctness, a run-time system monitors memory references so that they execute in the correct order. This completely software-implemented run-time system is called a Data Dependence Speculation system. With this method it is shown that speedups of up to ten times are achievable on 16 processors for code that compilers cannot parallelize.
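As a much-simplified sketch of the idea (hypothetical Python; the thesis' run-time system operates on real memory references), iterations run speculatively as if independent while their reads and writes are recorded, and any later iteration that read a location written by an earlier one violates the sequential order and must be re-executed:

    def detect_violations(ref_sets):
        """ref_sets: one (reads, writes) pair of address sets per iteration,
        recorded while the iterations executed speculatively in parallel.
        Conservatively flags every later iteration whose reads overlap an
        earlier iteration's writes, since it may have read too early."""
        violating = set()
        for i, (_, writes_i) in enumerate(ref_sets):
            for j in range(i + 1, len(ref_sets)):
                reads_j, _ = ref_sets[j]
                if writes_i & reads_j:
                    violating.add(j)   # iteration j must re-execute serially
        return sorted(violating)

    # Example: iteration 1 reads address 0x10, which iteration 0 writes.
    print(detect_violations([(set(), {0x10}), ({0x10}, set())]))  # -> [1]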
The second method helps a programmer to expose parallelism in critical sections. As the concurrency of a parallel program can be limited by overly conservative synchronization, this method allows the machine to execute past busy locks in a speculative manner. The machine ensures correctness by enforcing the programmer's intended path of execution. The method is compared with competing methods, and it is shown that its unique reordering feature provides higher performance and fewer misspeculations.
21 October 2002, 13.00
Licentiate seminar by Peter Gennemark, Computing Science, Chalmers University of Technology:
A model identification algorithm for cell signalling pathways
Room: MD7, Matematiskt centrum, Eklandagatan 86, Göteborg
10 October 2002, 13.15
Licentiate seminar by Josef Svenningsson, Computing Science, Chalmers University of Technology:
Efficient and Accurate Constraint Based Program Analysis, and Aspects of Program Optimization
Room: MD9, Matematiskt centrum, Eklandagatan 86, Göteborg
The first part concerns constraint based program analysis. The first paper in this part tackles the problem of efficiently performing context sensitive program analysis. We develop the notion of constraint abstractions and show how they can be used in constructing an efficient algorithm for computing solutions to context sensitive program analyses. As an example we give a flow analysis with flow subtyping, flow polymorphism and flow polymorphic recursion, and show how it can be implemented in O(n^3) time.
The second paper in the first part gives a concrete example of a constraint based context sensitive analysis. It is a usage analysis which keeps track of how many times a part of a program is used. This information can be used for optimising lazy functional programs. The analysis is specifically designed to scale in both precision and speed for large programs. The analysis employs the techniques developed in the first paper to achieve those goals.
The second part concerns fusion based program optimisation. In functional languages intermediate data structures are often used as glue to connect separate parts of a program. Many techniques have been proposed to automatically remove intermediate data structures. In the last paper we study the dual of one of the most successful techniques, short cut fusion. We show that the dual solves some longstanding problems in intermediate data structure removal. Specifically it can handle functions taking several arguments of which all are to be removed and functions consuming their input using accumulating parameters. These situations have previously only been handled by very sophisticated methods.
21 May 2002, 10.00
Licentiate seminar by Martin Kämpe, Computer Engineering, Chalmers University of Technology:
Prediction Methods for Branch and Cache Management in Computers
Hörsalsvägen 4, Göteborg
First in this thesis, a method to predict and dynamically adjust the cache block size is presented. The basic idea is to predict the amount of contiguous data that will be accessed, based on past behavior, and to load that amount of data upon a cache miss. As the amount of contiguous data varies during program execution, a dynamically adjusted block size reduces the number of accesses to main memory without introducing much cache pollution. Secondly, I present a method that identifies and predicts the mistakes made by the classical LRU replacement algorithm. Associating each mistake with the memory instruction causing it makes it possible to predict and prevent future mistakes caused by the same instruction, increasing the efficiency of the replacement algorithm and thereby the cache performance.
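A toy model of the first method (hypothetical Python, not the thesis mechanism): remember how many contiguous words each missing load went on to touch after its previous miss, and fetch that many words on its next miss:

    last_run = {}  # load PC -> contiguous words touched after its last miss

    def fetch_size(pc, default_words=4):
        """Number of words to fetch on a cache miss by the load at pc."""
        return last_run.get(pc, default_words)

    def record_run(pc, words_touched):
        """Called once the contiguous run following a miss has ended."""
        last_run[pc] = words_touched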
The second contribution in this thesis is twofold and in the field of branch prediction. First, it presents the benefits of taking an abstract view on prediction by using Fourier analysis rather than the time-series based prediction methods proposed in the literature. Secondly, program behavior causing current branch predictors to have approximately 90% accuracy is partly identified by applying Fourier analysis to the branch outcome history. Since a prediction is based on a relationship between past and future behavior, this relationship must be found to make a prediction. I show that Fourier analysis can identify relationships ranging over longer time spans than current branch predictors can harvest, and that it can store this information in a very efficient way.
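A small self-contained illustration of the approach (assumed Python/NumPy, not the thesis' implementation): the outcome history of a periodically behaving branch shows a dominant peak in its Fourier spectrum, exposing a correlation span that a short history register would miss:

    import numpy as np

    # Hypothetical outcome history: the branch is taken on every 7th
    # execution and not taken otherwise.
    history = np.array([1.0 if i % 7 == 0 else 0.0 for i in range(1024)])

    spectrum = np.abs(np.fft.rfft(history - history.mean()))
    dominant = int(np.argmax(spectrum[1:])) + 1   # skip the DC bin
    period = len(history) / dominant
    print(f"dominant period ~ {period:.1f} executions")  # ~7.0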
16 May 2002, 10.00
Licentiate seminar by Mathias Johansson, Computer Engineering, Chalmers University of Technology:
Video Communication over the Internet
Room: EL43, Linsen, ED-huset, Göteborg
This thesis contributes to the realization of a scalable and adaptive framework for video communication over the Internet by presenting novel algorithms and methods for multicast flow control and layered video coding. Moreover, the scope of Internet video is broadened by presenting a procedure for interconnecting multicast videoconferences with the World Wide Web. Finally, an enrichment of Internet video is proposed by introducing a system for stereoscopic video transmission.
26 April 2002, 13.15
Licentiate seminar by Daniel Eckerbert, Computer Engineering, Chalmers University of Technology:
The contributions in this thesis are two-fold: first, the deficiencies of
What has made the work in this thesis successful is the way in which power estimation has been approached: knowledge of the physical implementation of digital systems has allowed for the derivation of new, physically relevant models.
Keywords: VLSI, CMOS, RTL, Power Estimation, Deep Submicron, Short-Circuit, Leakage
16 April 2002, 10.00
Licentiate seminar by Peter Ljunglöf, Computing Science, Göteborg:
This thesis investigates different aspects of the parsing problem from the viewpoint of a functional programmer. It is conceptually divided into two parts, discussing the parsing problem from different perspectives; first as a comprehensive survey of possible implementations of combinator parsers; and second as pure functional implementations of standard context-free parsing algorithms.
The first part of the thesis is a survey of the possible implementations of combinator parsers that have previously been suggested in the literature, relating their different usages. A number of previously unknown parser implementations are also given, which are especially efficient for small and medium-sized natural language applications.
Combinator parsers are not very well suited for large-scale applications, and especially not for large-scale natural language processing. In those cases it is necessary to use other parsing algorithms. The second part of the thesis defines elegant and declarative, pure functional versions of some standard parsing algorithms for context-free grammars. The goal has been to implement the algorithms in a way that is close to their intuitive formulations without sacrificing computational efficiency. The implementations only use simple data structures and do not rely on a global updateable state, thus opening the way for nice functional implementations.
Finally, the thesis implements parser combinators that can collect the grammatical structure of the program, making it possible to use any suitable parsing algorithm and not just recursive descent. However, this requires a mildly impure extension of the host language Haskell.
The code from the thesis is available at
26 March 2002, 10.00
Licentiate seminar by Carlos Gonzalía, Computing Science, Göteborg:
We set up the necessary abstract definitions of a constructive calculus of relations based on categories (LT-allegories), so as to get an analogue of the corresponding classical system (allegories). The category of sets and relations is the concrete version of allegories, and we prove that its analogue (the LT-category of E-sets and E-relations) satisfies the definitions of the several classes of LT-allegories. However, this fails for the allegories involving complements or power constructions.
To overcome this limitation, we restrict the setting to finite E-sets and decidable E-relations, and prove that this restricted version satisfies all the definitions of the different classes of LT-allegory. We then show why this finite allegory is a good setting for relational database theory, and how we could use our formalisation to deal with database concepts in type theory. We also show how our formalisation allows us to express relational programs inside it, paving the way for our integration goal in future work.
Keywords: programming logic, type theory, relational calculus, relational database theory, categories of relations, formalised mathematics, program calculation.
Intrusion detection and fraud detection are two research areas which have developed in parallel with little or no interaction. This is somewhat surprising, since it could be argued that, largely, fraud detection is a type of application-specific intrusion detection, where the goal of the attacker is to benefit from a commercial service. Thus, it seems probable that many of the findings achieved for intrusion detection systems (IDS) are applicable to fraud detection systems (FDS) and vice versa. The thesis studies how these areas could be successfully merged and what kind of benefits could be achieved by cross-fertilisation, i.e. how we can utilise experience in one area to push the other area forward.
The thesis also deals with evaluating wait-free snapshot algorithms suitable for real-time systems. As snapshots are very useful in distributed embedded systems, we focused on experiment models using the CAN-bus.
We have also studied and implemented known lock-free and wait-free data structures. Some algorithms have been corrected and refined for implementation. Several of these data structures, as well as those presented in this thesis, have been put together into a library. The functionality of the library is designed to be easily accessible for non-experts in non-blocking protocols. Experiments on multi-processor platforms show significant improvements.
Keywords: real-time, synchronisation, non-blocking, timing information, snapshot, shared memory, lock-free, wait-free
6 March 2002, 14.00
Licentiate seminar by Håkan Kvarnström, Computer Engineering, Chalmers University of Technology:
The aim of this thesis is to use caches efficiently by improving the utilization of the space devoted to data caches. Two different techniques are proposed. The first technique addresses the issue that increased block sizes usually yield better performance but are detrimental to energy usage because of the lower block utilization of larger blocks. The proposal is to use smaller block sizes and compensate for the performance loss by selective software-controlled prefetching. Using a performance-tuning tool, the memory instructions generating the bulk of the misses are identified. By inserting prefetch instructions to cover the misses generated by these instructions, it is shown that it is possible to get the performance of larger blocks while keeping the energy usage at that of smaller blocks.
The second technique improves cache space utilization by only allocating cache blocks that will be reused during their lifetime in the cache. As the decision whether to cache or not is needed ahead of time, the technique employs reuse prediction: each prediction suggests whether a block should be allocated in the cache based on the block's reuse history. The mechanism employed to make the predictions has its roots in two-level branch prediction. It is shown that using this mechanism and a single-entry bypass buffer, it is possible to reduce the miss rate by up to 17% compared to a baseline employing an always-allocate strategy.
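A toy sketch of such a reuse predictor (hypothetical Python with made-up parameters, echoing two-level branch prediction rather than reproducing the thesis design): each block's recent reuse outcomes form a history pattern that indexes a table of two-bit saturating counters deciding whether to allocate or bypass:

    HIST_BITS = 4  # per-block reuse history length (hypothetical)

    class ReusePredictor:
        def __init__(self):
            self.history = {}                      # block -> recent reuse bits
            self.pattern = [2] * (1 << HIST_BITS)  # 2-bit counters, weakly "reuse"

        def predict_allocate(self, block):
            """True = allocate the block in the cache, False = bypass it."""
            return self.pattern[self.history.get(block, 0)] >= 2

        def update(self, block, reused):
            """Record whether the block was actually reused before eviction."""
            h = self.history.get(block, 0)
            c = self.pattern[h]
            self.pattern[h] = min(3, c + 1) if reused else max(0, c - 1)
            self.history[block] = ((h << 1) | int(reused)) & ((1 << HIST_BITS) - 1)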