

PhD seminars 2003

December 18th, 2003, 10:15

Public defence of PhD thesis by Qiao Haiyan, Computing Science, Chalmers University of Technology:

Testing and Proving in Dependent Type Theory

Room: Sal HC3, Hörsalsvägen 14, Chalmers, Göteborg.

Faculty opponent: Professor Mark Jones, Computer Science, Oregon Graduate Institute, USA


We show how random testing, model checking and interactive proving can be combined for functional program verification in dependent type theory.

We extend the proof assistant Agda/Alfa for dependent type theory with a tool for random testing of functional programs, thus combining proving and testing in one system. Testing is used for debugging programs and specifications before a proof is attempted. Proving is used to decompose a goal into subgoals that can be subjected to testing before a proof is attempted. Proving can also guide the testing strategy to cover all branches of a case statement, thus making it possible to localize the source of a bug more precisely. We show how some classic data structure algorithms (binary search tree insertion and AVL-tree insertion) are verified in the system. As a further case study, we show how the correctness of a BDD algorithm written in Haskell is verified by testing properties of its component functions.
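The testing loop described above can also be sketched outside the proof assistant. The following is a hypothetical QuickCheck-style sketch in Python (not the thesis tool, which operates inside Agda/Alfa and dependent type theory): a random generator produces input lists, and the property that binary search tree insertion preserves the ordering invariant is checked on each of them before any proof would be attempted.

```python
import random

def insert(tree, x):
    """Insert x into a BST represented as nested tuples (left, key, right)."""
    if tree is None:
        return (None, x, None)
    left, key, right = tree
    if x < key:
        return (insert(left, x), key, right)
    elif x > key:
        return (left, key, insert(right, x))
    return tree  # duplicate key: tree unchanged

def to_list(tree):
    """In-order traversal; the result is sorted iff the BST invariant holds."""
    if tree is None:
        return []
    left, key, right = tree
    return to_list(left) + [key] + to_list(right)

def prop_insert_preserves_order(trials=500):
    """Random testing of the property; returns a counterexample or None."""
    for _ in range(trials):
        xs = [random.randint(0, 50) for _ in range(random.randint(0, 20))]
        tree = None
        for x in xs:
            tree = insert(tree, x)
        inorder = to_list(tree)
        if inorder != sorted(inorder):
            return xs  # counterexample: report the failing input for debugging
    return None

assert prop_insert_preserves_order() is None
```

A failing run would return the generated input list, which plays the same debugging role as a QuickCheck counterexample: the specification or the program is fixed before a proof attempt is made.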

We then augment the proof assistant Agda/Alfa with a simple tool for model checking boolean formulas using binary decision diagrams. We show two applications of this tool. The first is to prove Urquhart's U-formulas using a combination of interactive proving and testing. The second one is the verification of a sorting algorithm (bitonic sort) using a combination of interactive proving, model checking, and random testing.
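The idea of model checking boolean formulas can be illustrated without BDDs. The sketch below is a stand-in for the Agda/Alfa tool (which uses binary decision diagrams): it decides validity of a formula by exhaustively enumerating truth assignments, which is feasible only for a handful of variables — precisely the blow-up that BDDs are designed to avoid.

```python
from itertools import product

def is_tautology(formula, variables):
    """Model check: `formula` must evaluate to True under every assignment."""
    for values in product([False, True], repeat=len(variables)):
        env = dict(zip(variables, values))
        if not formula(env):
            return False  # a falsifying assignment is a counterexample
    return True

# Example: Peirce's law ((p -> q) -> p) -> p, a classical tautology.
implies = lambda a, b: (not a) or b
peirce = lambda env: implies(implies(implies(env['p'], env['q']),
                                     env['p']),
                             env['p'])

assert is_tautology(peirce, ['p', 'q'])
```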

Our tool uses test data generators which are defined inside Agda/Alfa. We can therefore use the type system to prove properties about them, in particular surjectivity stating that all possible test cases can indeed be generated. We discuss how surjective generators can be written for a class of inductively defined dependent types by using the correspondence between inductive definitions and logic programs. As an example, we give a generator for theorems in propositional logic.
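Surjectivity of a generator can itself be tested before it is proved. The sketch below uses illustrative names and is not the Agda/Alfa development: it generates random propositional formulas and checks, by enumerating all formulas up to a small depth bound, that every one of them is eventually produced by the generator.

```python
import random

def gen_formula(depth, rng):
    """Randomly generate a formula over variables p, q as nested tuples."""
    if depth == 0 or rng.random() < 0.3:
        return rng.choice(['p', 'q'])
    op = rng.choice(['not', 'and', 'or'])
    if op == 'not':
        return ('not', gen_formula(depth - 1, rng))
    return (op, gen_formula(depth - 1, rng), gen_formula(depth - 1, rng))

def all_formulas(depth):
    """Enumerate every formula up to the given depth bound."""
    if depth == 0:
        return {'p', 'q'}
    smaller = all_formulas(depth - 1)
    result = set(smaller)
    result |= {('not', f) for f in smaller}
    result |= {(op, f, g) for op in ('and', 'or')
                          for f in smaller for g in smaller}
    return result

# Surjectivity check at depth 1: sample until every depth-<=1 formula appears.
rng = random.Random(0)
target = all_formulas(1)
seen = set()
while seen != target:
    seen.add(gen_formula(1, rng))
print(len(target))  # 2 variables + 2 negations + 2*2*2 binary = 12 formulas
```

The enumeration-based check only works for a finite depth bound; the point of doing it inside the type system, as the thesis does, is that surjectivity can be proved once and for all.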

The last part of the thesis is an advanced case study in formalisation and proving. We describe a formalisation of a reduction-free normalisation function nf for the typed lambda calculus. To extract the normalisation function nf, we develop a constructive category theory, which is called P-category theory, and a formalisation of the typed lambda calculus. From this we prove numerous properties of the typed lambda calculus. Two different approaches to formalising the typed lambda calculus are presented in ALF. Properties of the typed lambda calculus are organised in a categorical way, using the notion of categories with families. It is proved that the two approaches are equivalent as categorical structures.

December 16th, 2003, 13:15

Public defence of PhD thesis by Henrik Eriksson, Computer Engineering, Chalmers University of Technology:

Efficient Implementation and Analysis of CMOS Arithmetic Circuits

Room: Sal HC1, Hörsalsvägen 14, Chalmers, Göteborg.

Faculty opponent: Assistant Professor Borivoje Nikolic, University of California at Berkeley, USA


With the latter part of the last century in mind, it is not hard to imagine that in the foreseeable future there will be a never-ending demand for increased data processing capability. In addition, there will be a growing public desire for versatile mobile devices, such as a combined computer, phone, and camera with many hours of operation. In such a world, a deep understanding of speed-limiting and power-dissipating phenomena will be crucial for meeting design targets within reasonable time (= money). Designers of these future circuits will face many implementation choices, so means of guiding them to fast and correct choices should be sought.

Arithmetic circuits are power-hungry and performance-limiting in applications such as microprocessors and DSPs. By improving the arithmetic circuits, the overall performance can be significantly improved. This thesis presents efficient implementations of CMOS adders and multipliers as well as an investigation of the voltage scaling of glitches in future CMOS process technologies. An extension of the logical-effort delay model (an example of such a designer guideline) to include the gate-input-to-output transition of pass-transistors in complementary pass-transistor logic circuits is also suggested.

Three efficient adder implementations are proposed. The first is a Manchester adder in which repeaters and an optimized bypass structure are employed to obtain an area- and energy-efficient design. The second is an efficient implementation of the dot-operator cells of parallel-prefix adders. The third is an accumulator for a special application in which a lower power dissipation can be achieved at the cost of extra noise.

A comprehensive comparison of ten different multipliers has been conducted in terms of delay, power, area, and design effort. Among these ten multipliers were new structures, e.g. the RRT multiplier (which facilitates the use of bypass circuitry throughout the reduction tree) and a modified Dadda multiplier. Well-known multipliers such as array, overturned stairs, and TDM multipliers were also included in the comparison. To find critical delays in the multipliers, a new way of finding bad-case test vectors was suggested and used.


December 12th, 2003, 13:15

Public defence of PhD thesis by Daniel Eckerbert, Computer Engineering, Chalmers University of Technology:

Power Estimation and Multi-Phase Clock Generation for the Deep Submicron Era

Room: HA2, Hörsalsvägen 4, Chalmers, Göteborg.

Faculty opponent: Associate Professor Wayne Burleson, University of Massachusetts Amherst, USA


Scaling has been the driving force behind the immense development the field of electronics has seen over the last few decades. It has brought us from integrated circuits operating in the kilohertz range in the 1970s to the gigahertz range in just thirty years. Scaling has also brought the number of transistors on a single chip from a few thousand to hundreds of millions in the same time frame. Because of the development we have seen in the past, there are expectations for the field to continue developing at the same pace in the future.

Modern high-performance integrated circuits have a power density that exceeds that of a hot plate. A methodology is presented which increases the accuracy of high-level power estimation, a requirement for early design-space exploration. The methodology is an extension of traditional register-transfer-level power estimation methodologies but gains accuracy by being able to distinguish between different power dissipation mechanisms.

It is becoming more and more difficult to increase the clock frequency for integrated circuits as scaling alone will not maintain the development seen in the past. New types of multi-phase clock generation delay-locked loops might prove to be one road to increased performance. A mixed-mode delay-locked loop architecture is proposed with the intent to retain the good properties from both analog and digital implementations. The most important property of the digital architecture that is made available in the mixed-mode architecture is the ability to operate in a clock-gated environment. An implementation of this mixed-mode architecture is also presented which preserves the simple delay elements from analog architectures while at the same time enabling clock gating which was previously reserved for digital architectures.


December 9th, 2003, 13:15

Public defence of PhD thesis by Florian-Daniel Otel, Computer Engineering, Chalmers University of Technology:

Reliable and Deterministic Network Services

Room: HB2, Hörsalsvägen 8, Chalmers, Göteborg.

Faculty opponent: Professor Arne Nilsson, Blekinge tekniska högskola and North Carolina State University, Raleigh, USA


In the post-Internet-bubble economic environment, the obvious lack of any disruptive new technologies that could give new impetus to network growth (as experienced, for example, after the appearance of the Web) has led to a trivialization of network services: aside from grass-roots, semi-subterranean applications like file sharing, entertainment content swapping and on-line gaming, it can be argued that network services are presently used in a manner not very different from a decade ago. In this respect, one of the factors that can be identified as limiting further development is the perceived lack of guarantees with regard to network services.

The articles comprising this thesis address the subject of network service guarantees from two main perspectives.

In Part I we study network service guarantees from a reliability standpoint. In particular, we address the following issues:

Study what network infrastructure characteristics (e.g. data transmission method properties) are prone to inducing vulnerabilities in the overlaying network services.

Analyze the mechanisms and methods used at network protocol stack level for mitigating impairments like temporary traffic congestion or network element failures.

Propose mechanisms for fast network service level restoration in the wake of network element failures.

In Part II of the thesis we address the topic of guarantees with respect to the network service performance level. In particular we study and propose mechanisms for guaranteeing deterministic upper bounds on worst-case queuing delays in general packet networks. Using network calculus concepts of traffic envelope and service curve we show how to deliver end-to-end queuing delay guarantees in connection-oriented networks carrying several traffic classes, with multiple packet sizes. Additionally, we measure the efficiency and performance level of such mechanisms and analyze inherent limitations of the proposed method.
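The abstract does not state the thesis' exact bounds, but the flavor of a network-calculus delay guarantee can be shown with the textbook result: traffic constrained by a token-bucket envelope (burst sigma, sustained rate rho) served by a rate-latency service curve incurs a worst-case queuing delay of at most the latency plus sigma divided by the service rate. A minimal sketch, with illustrative numbers:

```python
def delay_bound(sigma, rho, rate, latency):
    """Worst-case queuing delay for (sigma, rho)-constrained traffic
    (token bucket: burst sigma bits, sustained rate rho bit/s) served by
    a rate-latency curve beta(t) = rate * max(0, t - latency).
    Standard network-calculus bound: D <= latency + sigma / rate,
    valid under the stability condition rho <= rate."""
    if rho > rate:
        raise ValueError("unstable: arrival rate exceeds service rate")
    return latency + sigma / rate

# Example: 1500-byte burst, 1 Mbit/s sustained, 10 Mbit/s service, 2 ms latency.
d = delay_bound(sigma=1500 * 8, rho=1e6, rate=10e6, latency=0.002)
print(f"worst-case delay: {d * 1000:.2f} ms")  # prints "worst-case delay: 3.20 ms"
```

For a connection crossing several such servers, per-hop bounds of this form can be summed (or tightened by concatenating the service curves), which is the kind of end-to-end guarantee Part II studies.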

Keywords: Network security, TCP, network failure recovery, traffic engineering, constraint-based routing, queuing delay guarantees, network calculus, traffic envelope, arrival curve, service curve.

October 10th, 2003, 13:15

Public defence of PhD thesis by Ulf Assarsson, Computer Engineering, Chalmers University of Technology:

A Real-Time Soft Shadow Volume Algorithm

Room: Sal HC2, Hörsalsvägen 14, Chalmers, Göteborg.

Faculty opponent: Assistant Professor Fredo Durand, Computer Graphics Group, MIT


Rendering of shadows is a very important ingredient in three-dimensional graphics, since shadows increase the level of realism and provide cues to spatial relationships. Area or volumetric light sources give rise to so-called soft shadows, i.e., there is a smooth transition from no shadow to full shadow. For hard shadows, which are generated by point light sources, the transition is abrupt. Since all real light sources occupy an area or volume, soft shadows are more realistic than hard shadows. Fast rendering of soft shadows, preferably in real time, has been a subject of research for decades, but so far it has largely remained an unsolved problem.

Therefore, this thesis, which is based on five papers, focuses on how to achieve real-time rendering of soft shadows. The first four papers constitute the foundation and evolution of a new algorithm, called the soft shadow volume algorithm, and the fifth paper provides an essential proof of correctness and generality for this and some previous shadow algorithms. The algorithm augments and extends the well-known shadow volume algorithm for hard shadows. Two passes are used, where the first pass consists of the classic shadow volume algorithm to generate the hard shadows (umbra). The second pass compensates to provide the softness (penumbra). This is done by generating penumbra wedges and rasterizing them using a custom pixel shader that, for each rasterized pixel, projects the hard shadow quadrilaterals onto the light source and computes the covered area.

A result of the thesis is an algorithm capable of real-time soft shadows that utilizes programmable graphics hardware. The algorithm produces high-quality shadows for area and volumetric light sources. It also handles textured light sources, which is currently a very rare capability among real-time soft shadow algorithms. Even video textures are allowed as light sources.

Keywords: Computer Graphics, Three-Dimensional Graphics and Realism, Shading, Shadowing, Soft Shadows, Graphics Hardware, Pixel Shaders.


September 29th, 2003, 13:00

Public defence of PhD thesis by Björn Andersson, Computer Engineering, Chalmers University of Technology:

Static-Priority Scheduling on Multiprocessors

Room: HC3, Hörsalsvägen 16, Chalmers, Göteborg

Faculty opponent: Professor Aloysius K. Mok, Department of Computer Sciences, University of Texas at Austin


This thesis deals with the problem of scheduling a set of tasks to meet deadlines on a computer with multiple processors. Static-priority scheduling is considered, that is, a task is assigned a priority number that never changes and at every moment the highest-priority tasks that request to be executed are selected for execution.

The performance metric used is the capacity that tasks can request without missing a deadline. It is shown that every static-priority algorithm can miss deadlines even though only close to 50% of the capacity is requested. The new algorithms in this thesis have the following performance. In periodic scheduling, the capacity that can be requested without missing a deadline is 33% for migrative scheduling and 50% for non-migrative scheduling. In aperiodic scheduling, many performance metrics have been used in previous research. With the aperiodic model used in this thesis, the capacity that can be requested without missing a deadline is 50% for migrative scheduling and 31% for non-migrative scheduling.
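A utilization bound of this kind serves as a sufficient schedulability test: if the total requested capacity stays below the bound, no deadline is missed. A minimal sketch of such a test (the task model and helper names are illustrative, not taken from the thesis):

```python
def utilization(tasks):
    """Total requested capacity of periodic tasks given as
    (worst-case execution time, period) pairs."""
    return sum(c / t for c, t in tasks)

def schedulable_by_bound(tasks, processors, bound):
    """Sufficient test: the task set is schedulable if its utilization
    does not exceed bound * processors, where `bound` is the guarantee
    of the chosen algorithm (e.g. 0.5 for a non-migrative algorithm
    with a 50% bound, 0.33 for a migrative one with a 33% bound)."""
    return utilization(tasks) <= bound * processors

tasks = [(1, 4), (2, 10), (3, 12)]   # (WCET, period): 0.25 + 0.2 + 0.25
print(round(utilization(tasks), 3))  # 0.7
print(schedulable_by_bound(tasks, processors=2, bound=0.5))  # 0.7 <= 1.0 -> True
```

Note that the test is sufficient but not necessary: a task set exceeding the bound may still meet all deadlines, which is why the 50% limit above is stated as a worst case over all static-priority algorithms.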

Keywords: real-time systems, real-time scheduling, multiprocessors, multiprocessor scheduling, static-priority scheduling, global scheduling, partitioned scheduling, periodic, aperiodic, online scheduling.


September 19th, 2003, 10:15

Public defence of PhD thesis by Tuomo Takkula, Computing Science, Chalmers University of Technology:

Aspects of Duality in Integer Programming

Room: Hörsalen, Matematiskt centrum, Eklandagatan 86, Chalmers, Göteborg

Faculty opponent: Professor Jörgen Tind, Institute for Mathematical Sciences, University of Copenhagen


This thesis deals with various problems arising when dualizing integer programs and combinatorial optimization problems. On the one hand, the corresponding dual functions are piecewise linear, calling for approaches from nondifferentiable optimization; on the other hand, the problems have special structures which can be exploited.

Line searches along the dual function of integer programs are an example of when the latter is the case. We introduce efficient specialized line search procedures for the dual functions of binary programs, the Lagrangian-surrogate dual of the TSP and the p-median problem, and for the dual function of the shortest path problem with a knapsack constraint.

A specialized line search is also used for a conditional epsilon-ascent bundle method for the dual function of binary programs. There the line search has a different role than before: instead of finding the maximum of a one-dimensional function, the line search must deliver a conditional epsilon-subgradient. We show how this can be done efficiently and exactly.

We apply duality theory to the satisfiability problem (SAT) in order to prove unsatisfiability. To do this, the SAT problem is converted into an equivalent binary program modeling the "minimum unsatisfiable clauses" problem. We approach this problem by dualizing it, yielding a lower bound on the number of unsatisfiable clauses. If this value is strictly positive, then the instance is unsatisfiable. Two dualizations are tested, namely the Lagrangian dual (with very moderate success) and the split dual from Lagrangian decomposition (variable splitting), with encouraging results.

In a final note we discuss possibilities of extending the usual notion of piecewise linearity in order to capture functions which "look" piecewise linear but do not fit the usual definition. We comment on certain difficulties one encounters when doing this.


September 5th, 2003, 13:15

Public defence of PhD thesis by Håkan Forsberg, Computer Engineering, Chalmers University of Technology:

Optical Interconnection Architectures for High-Performance Computers and Switches

Room: HC2, Hörsalsvägen 16, Chalmers, Göteborg

Faculty Opponent: Professor Ted Szymanski, McMaster University, Canada


The incredible growth in processing power and switching capacity in high-performance computers and switches puts very high demands on the interconnection networks. This is recognized in particular in massively parallel processing and extreme switching architectures. The demands are such that new interconnection architectures must be considered.

In this thesis, we give close consideration to the characteristics of both optics and electronics in order to find new interconnection architectures. We also explore how optical interconnections can provide a new design space for developers of parallel computer systems. We believe that planar integrated free-space optics technology is a strong candidate for implementing the kind of interconnection networks considered necessary for the applications studied in this thesis. We therefore base our work on new interconnection architectures on this technology. Our approach is to try to fit algorithms, architectures and physical implementations to each other to offer systems with high performance.

The thesis gives important input, in the form of examples, to show the advantages of using optical interconnections in the design of future high-performance embedded signal processing systems and switches and routers. We show how the use of free-space optical interconnections makes it possible to implement heavily cross-connected and complicated network topologies suitable for embedded signal processing. We also demonstrate that totally different switch fabric architectures can make use of optical technologies to facilitate the design of deeply cross-connected high-capacity switches. The results show that several expected requirements of future embedded signal processing systems and terabit switches and routers can be met using optical solutions over short distances, e.g. at the chip-to-chip or board-to-board level.

Keywords: Optical interconnections, high-performance computers, switches and routers, embedded signal processing, planar integrated free-space optics, hypercube topology, switch fabrics.

June 12th, 2003, 10:00

Public defence of PhD thesis by Yi Zhang, Computing Science, Chalmers University of Technology:

Non-blocking Synchronization: Algorithms and Performance Evaluation

Room: Hörsalen, Matematiskt Centrum, Eklandagatan 86, Göteborg.


The thesis investigates non-blocking synchronization in shared memory systems, in particular in high performance shared memory multiprocessors and real-time shared memory systems. We explore the performance impact of non-blocking synchronization in high performance shared memory multiprocessors and the applicability of non-blocking synchronization in real-time systems.

The performance advantage of non-blocking synchronization over mutual exclusion in shared memory multiprocessors has been advocated by the theory community for a long time. In this work, we try to make non-blocking synchronization appreciated by application designers and programmers through a sequence of results. First, we develop a non-blocking FIFO queue algorithm which is simple and can be used as a building block for applications and libraries. The algorithm is fast and scales very well in both symmetric and non-symmetric shared memory multiprocessors. Second, we implement a fine-grain parallel Quicksort using non-blocking synchronization. Although fine-grain parallelism has been thought to be inefficient for computations like sorting due to synchronization overhead, we show that efficiency can be achieved by incorporating non-blocking techniques for sharing data and computation tasks in the design and implementation of the algorithm. Finally, we investigate how performance and speedup of applications would be affected by using non-blocking rather than blocking synchronization in parallel systems. We show that for many applications, non-blocking synchronization leads to significant speedup for a fairly large number of processors, while never slowing the applications down.
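The retry pattern at the heart of such non-blocking algorithms can be sketched as follows. This is an illustration of the principle only, not the thesis' FIFO queue: Python has no hardware compare-and-swap, so the atomic step is simulated with a lock, whereas a real implementation uses the processor's CAS instruction. A Treiber-style stack is shown because it is the simplest structure exhibiting the pattern.

```python
import threading

class AtomicRef:
    """Simulated atomic reference; the lock stands in for hardware CAS."""
    def __init__(self, value):
        self._value = value
        self._lock = threading.Lock()

    def get(self):
        return self._value

    def compare_and_swap(self, expected, new):
        """Atomically set to `new` iff the current value is `expected`."""
        with self._lock:
            if self._value is expected:
                self._value = new
                return True
            return False

class LockFreeStack:
    """Treiber-style stack; the same CAS retry loop underlies
    non-blocking FIFO queues."""
    def __init__(self):
        self.head = AtomicRef(None)  # node = (item, next) tuple

    def push(self, item):
        while True:  # retry loop: no thread ever blocks waiting for another
            old = self.head.get()
            if self.head.compare_and_swap(old, (item, old)):
                return

    def pop(self):
        while True:
            old = self.head.get()
            if old is None:
                return None  # empty
            if self.head.compare_and_swap(old, old[1]):
                return old[0]

s = LockFreeStack()
s.push(1); s.push(2)
print(s.pop())  # 2
```

If another thread changes the head between the `get` and the `compare_and_swap`, the CAS fails and the loop simply retries; that some thread always makes progress is what distinguishes non-blocking synchronization from mutual exclusion.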

Predictability is the dominant factor in the performance metrics of real-time systems and a necessary requirement for non-blocking synchronization in real-time multiprocessors. In this thesis, we propose two non-blocking data structures with predictable behavior and present an inter-process coordination protocol that bounds the execution time of lock-free shared data object operations in real-time shared memory multiprocessors. The first data structure is a non-blocking buffer for real-time multiprocessors. The buffer gives concurrent real-time tasks a way to read and write shared data, allows multiple write operations and multiple read operations to be executed concurrently, and has predictable behavior. The second data structure is a special wait-free queue for real-time systems. We present efficient algorithmic implementations for the queue.

These queue implementations can be used to enable communication between real-time tasks and non-real-time tasks in systems. The inter-process protocol presented is a general protocol which gives predictable behavior to any lock-free data structure in real-time multiprocessors. The protocol works for the lock-free implementations in real-time multiprocessor systems in the same way as the multiprocessor priority ceiling protocol (MPCP) works for mutual exclusion in real-time multiprocessors. With the new protocol, the worst case execution time of accessing a lock-free shared data object can be bounded.


June 10th, 2003, 13:00

Public defence of PhD thesis by Kristina Forsberg, Computer Engineering, Chalmers University of Technology:

Design Principles of Fly-By-Wire Architectures

Room: HC2, Hörsalsvägen 14, Göteborg


Development of embedded real-time systems is highly specialized and time consuming. For instance, control-by-wire systems will continue to face increased requirements on flexibility, scalability, low weight, predictability and testability, at reduced complexity and maintenance costs. Further, these control systems are safety-critical and require fault tolerance. Expectations and increased requirements on future embedded control systems necessitate methods and guidelines for how to progress in dependable system design.

In this research, design parameters and trade-offs in designing future fault-tolerant control systems are analyzed and elaborated upon for fly-by-wire architectures. Intelligent sensors and smart actuators will constitute the main building blocks of future flight control systems. The control should be distributed among these sensor and actuator nodes to make best use of the system capacity. The study shows that there is no need for a specialized dedicated control computer.

Various design issues regarding distributed architectures are theoretically analyzed, including control and computing allocation between nodes, physical distribution, hardware redundancy, communication scheduling and fault handling. On the basis of these findings, a robust fault-tolerant distributed architecture that could be designed ten years from now is defined. The study is based on control laws and known characteristics of the present Flight Control System, FCS, of JAS 39 Gripen. Further, real-time performance and dependability features of the distributed FCS are simulated, and the findings are validated using a computerized model developed at Saab.

The structure and lessons learned during the process of how to design the next generation FCS are presented and assembled into a top-down method for the conceptual design phase of general architectures for control-by-wire systems. Our cost-efficient design philosophy includes an application-oriented design process that utilizes intrinsic redundancy, employs hardware redundancy only to tolerate permanent physical faults, and distributes functions so that inter-node communication is minimized.

Keywords: Distributed control, systems design, flight control systems, fault tolerance, redundancy management, replica consistency, reliable inter-node communication.

June 5th, 2003, 13:00

Public defence of PhD thesis by Mathias Johanson, Computer Engineering, Chalmers University of Technology:

Supporting Video-mediated Communication over the Internet

Room: HA2, Hörsalsvägen 4, Göteborg


The tremendous success of the Internet in providing a global communication infrastructure for a wide variety of applications has inspired the invention of packet video systems for synchronous interpersonal communication. Since the Internet was originally designed as a data communication network, primarily supporting asynchronous applications like file transfer and electronic mail, the realization of video-mediated communication systems based on Internet technology presents considerable technological challenges. Specifically, the best-effort service model of the Internet, which does not guarantee timely delivery of packets, implies that video applications must be resilient to packet loss and adaptive to variations in bandwidth and delay.

A primary concern when designing a framework for Internet-based video-mediated communication is how to make the systems scalable to a large number of widely distributed users. Another fundamental issue is how to support video-mediated communication in highly heterogeneous environments. Since the Internet is built upon network connections of widely different capacities and the computers connected to the network have vastly different characteristics, video applications must be adaptive to diverse and dynamic conditions. Furthermore, video-mediated communication systems must take various application-specific requirements and usability issues into consideration.

This thesis contributes to the realization of a flexible framework for video-mediated communication over the Internet by presenting scalable and adaptive algorithms for multicast flow control, layered video coding and robust transport of video. Enrichments of video-mediated communication, in the shape of stereoscopic video transmission mechanisms and mobility support, are proposed along with design and implementation guidelines. Furthermore, the scope of Internet video is broadened by the introduction of a video gateway interconnecting multicast videoconferences with the World Wide Web.

In addition to the contributions on core technology, the thesis also treats applications of video-mediated communication. Specifically, the use of video for distributed collaborative teamwork is explored through experiments with prototype implementations.

Keywords: Video-mediated communication, teleconferencing, Internet video, layered multicast, congestion control, layered video coding, robust video transmission, video gateways, stereoscopic video, distributed collaborative work

June 4th, 2003, 13:15

Public defence of PhD thesis by Örjan Askerdal, Computer Engineering, Chalmers University of Technology:

On Impact and Tolerance of Data Errors with Varied Duration in Microprocessors

Room: HC1, Hörsalsvägen 14, Göteborg


The evolution of high-performance and low-cost microprocessors has led to their almost pervasive usage in embedded systems such as automotive electronics, smart gadgets, communication devices, etc. These mass-market products, when deployed in safety-critical systems, require safe services albeit at low recurring costs. Moreover, as these systems often operate in harsh environments, faults will occur during system operation, and thus, must be handled safely, i.e., tolerated.

This thesis investigates the efficiency of adding software-implemented fault tolerance techniques to commercial off-the-shelf (COTS) microprocessors. Specifically, the following problems are addressed:

· Which faults need to be tolerated considering the architecture, implementation and operational environments for COTS processors?
· Which software-implemented fault-tolerance techniques are effective and efficient to use?
· How can the efficiencies of such designs be evaluated?

The main contribution of this thesis is the development of novel approaches for estimating the effects of data errors with varied duration, and for ascertaining the efficiency of applied fault-tolerance techniques. These approaches are based on identifying the characteristics that determine which effects data errors will have on the system. Then these characteristics can be varied at a high abstraction level and the effects observed. The first approach is based on response analysis methods for understanding the effects of data errors on control systems. The second is a VHDL simulation-based fault injection method, based on insertion of specific components (so-called saboteurs) for varying the characteristics. As most system development processes start at a high abstraction level, we expect our approaches to be applied early in the process, and be a useful complement to traditional post-design assessment approaches such as fault-injection.
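The idea of varying error characteristics at a high abstraction level and observing their effects can be sketched in a few lines. The example below is illustrative only (the thesis uses control-theoretic response analysis and VHDL-based fault injection with saboteur components): a single bit flip is injected into the state of a running accumulation, and the deviation from the fault-free run is measured.

```python
def flip_bit(value, bit):
    """Inject a data error: flip one bit of an integer value."""
    return value ^ (1 << bit)

def accumulate(samples, inject_at=None, bit=0):
    """Running sum (a stand-in for a control-law state update); optionally
    corrupt the state just before step `inject_at`."""
    state = 0
    outputs = []
    for i, x in enumerate(samples):
        if i == inject_at:
            state = flip_bit(state, bit)  # the injected data error
        state += x
        outputs.append(state)
    return outputs

samples = [1, 2, 3, 4]
golden = accumulate(samples)                      # fault-free reference run
faulty = accumulate(samples, inject_at=2, bit=3)  # error of 2**3 at step 2
deviation = [abs(a - b) for a, b in zip(golden, faulty)]
print(deviation)  # [0, 0, 8, 8]: the error persists in the state
```

Varying the injection step, the affected bit, and the error duration over many runs gives exactly the kind of error-effect characterization that the high-level approaches in the thesis aim to provide before a detailed implementation exists.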

Keywords: error effect analysis, error propagation analysis, fault injection, fault tolerance, dependability

25 March 2003, 13:00

Public defence of PhD thesis by Göran Falkman, Computing Science, Chalmers University of Technology:

Issues in Structured Knowledge Representation: A Definitional Approach with Application to Case-Based Reasoning and Medical Informatics

Room: Hörsalen, Matematiskt centrum, Eklandagatan 86, Göteborg


Several issues concerned with structured knowledge representation based on definitional structures are discussed: the realisation of knowledge-based systems using declarative programming, similarity assessment in knowledge representation and case-based reasoning, and the importance of human-computer interaction and information visualisation in knowledge-based systems. To illustrate the basic ideas, real-world applications from the area of oral medicine are used.

We present MedView, a project aiming at developing models, methods and tools to support clinicians within oral medicine in their everyday work and research. The definitional model used in MedView constitutes the main governing principle in the project, not only in the formalisation of clinical terms and concepts, but in visualisation models and in the design and implementation of individual tools and the system as a whole as well.

In the context of case-based reasoning, a novel similarity framework based on definitional structures is presented. In this framework, a uniform definitional model is used for case representation, similarity assessment and adaptation alike. The similarity model is general enough to capture many different types of similarity measures, e.g., ordinal, cardinal and asymmetric measures.

We also describe how a uniform declarative model can be used as the basis for an interaction model and for two information visualisation tools in the area of oral medicine. We describe how known visualisation techniques can to a large extent be modelled using the conceptual model of the underlying data, enabling close interaction with the user and a tight coupling between the two visualisation tools.


21 February 2003, 10:00

Public defence of PhD thesis by Anders Hansson, Transmission Theory, Chalmers University of Technology:

Generalized APP Detection for Communication over Unknown Time-Dispersive Waveform Channels

Room: HC1, Hörsalsvägen 14, Göteborg


The principle of transmitting time-equidistant pulses to carry discrete information is fundamental. When the pulses overlap, intersymbol interference occurs. Maximum-likelihood sequence detection of such signals observed in additive white Gaussian noise (AWGN) was established in the early 1970s. Due to distortion and multipath propagation, it is less artificial to assume that the received pulse shape is unknown to the receiver designer, and in this thesis the channel is modeled as an unknown (and time-dispersive) linear filter with AWGN. First, we discuss why the conventional optimal front-end (based on the notion of a sufficient statistic and matched filtering) is inappropriate in this context. We revisit continuous time in order to derive an equivalent vector channel, and an alternative optimality criterion is reviewed. Moreover, we present an optimal sequence detector that performs joint estimation/detection by employing the generalized maximum-likelihood technique, and we show how such a detector relies on an exhaustive tree search. Pruning the optimal search tree leads to a suboptimal complexity-constrained algorithm, where only a subset of all sequences is evaluated as candidates. These elementary ideas are subsequently extended to the case of blind (or semiblind) soft decision detection, which also incorporates the concept of bi-directional estimation. The soft decisions are generated in the form of approximate a posteriori probabilities (APPs), and their soundness is evaluated by considering iterative detection of interleaved serially concatenated codes.


4 February 2003, 10:15

Public defence of PhD thesis by Stanislav Belenki, Computer Engineering, Chalmers University of Technology:

On Connection Admission Control in Computer Networks

Room: HC3, Hörsalsvägen 14, Göteborg


The best-effort nature of the Internet is a result of the relative simplicity of the network. This nature also allows the Internet to achieve high utilization of its resources. However, these benefits are traded for the inability of the Internet to provide guaranteed or predictive quality of service (QoS). As a consequence, the global computer network cannot reliably support real-time multimedia applications. It is a great challenge to turn the Internet into a QoS-capable communication network. Clearly, a QoS-capable Internet must remain simple and efficient.

A QoS-capable network must contain three components: packet classification, flow isolation, and connection admission control (CAC). CAC makes admission decisions on connections generated by individual real-time network applications. A new connection is admitted to the network only if admitting it neither degrades the QoS of connections already in progress nor prevents the target QoS from being delivered to the new connection. In addition, CAC must ensure that maintaining the QoS of connections in progress does not come at the cost of low utilization of network resources. Efficient implementations of packet classification and flow isolation are widely available in contemporary network equipment. However, a simple admission control scheme that efficiently utilizes network resources while having good scalability properties is still a challenge for the Internet research community.

In this thesis the author presents a Heuristic-based Admission Control algorithm (HAC). Among its properties is independence from the network infrastructure: the algorithm does not require any additional protocols (e.g. ATM or RSVP). While HAC is a per-hop CAC algorithm, the complexity of its admission decision is very low. The algorithm is configured explicitly with the target QoS parameters, i.e. packet loss rate and queuing delay. In addition, HAC is capable of efficiently self-tuning its internal control parameters. Because the algorithm is ignorant of the traffic parameters of individual connections, it does not discriminate against connections with certain values of those parameters, as many CAC algorithms do. This was confirmed by a simulation-based performance evaluation of the algorithm. The evaluation also revealed a good match between the target and actual levels of QoS, together with high utilization of network capacity.
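
As a rough illustration of such a measurement-based admission test, consider the following sketch. The class name, the headroom parameter, and the control rule are illustrative assumptions for exposition only, not the HAC algorithm itself:

```python
class AdmissionController:
    """Hypothetical measurement-based admission test in the spirit of HAC:
    admit a new connection only if the measured load and backlog keep the
    link within its target QoS. Illustrative only."""

    def __init__(self, capacity_bps, target_delay_s, headroom=0.1):
        self.capacity = capacity_bps
        self.target_delay = target_delay_s
        self.headroom = headroom      # the real algorithm self-tunes this

    def admit(self, measured_rate_bps, queue_bytes):
        # Estimate the current queuing delay from the measured backlog.
        delay = 8 * queue_bytes / self.capacity
        if delay > self.target_delay:
            return False              # QoS of ongoing connections at risk
        # No per-connection traffic parameters are consulted, mirroring the
        # point that HAC is ignorant of them.
        return measured_rate_bps < (1 - self.headroom) * self.capacity
```

The target QoS parameters enter as explicit configuration, while the decision itself is a constant-time check per admission request.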

Keywords: admission control, quality of service, packet-switched networks, the Internet.

Licentiate seminars 2003

November 14th 2003, 13:15

Licentiate seminar by Dan Gorton (Andersson), Computer Engineering, Chalmers University of Technology:

Extending Intrusion Detection with Alert Correlation and Intrusion Tolerance

Room: EL42, EDIT-huset, Rännvägen 6/Hörsalsvägen 11, Chalmers

Discussion leader: Dr. Ulf Lindqvist, SRI International, Menlo Park, California


Intrusion detection is an important security tool. It can provide valuable information about the current status of security. However, as enterprises deploy multiple intrusion detection sensors at key points in their networks, the issue of correlating messages from these sensors becomes increasingly important. A correlation capability reduces alert volume, and potentially improves detection performance through sensor reinforcement or complementarity. Correlation is especially advantageous when heterogeneous sensors are employed, because of the potential to aggregate different views of the same incident.

This thesis studies a number of different properties of intrusion alert correlation, such as standard formats and similarity metrics. A conceptual framework is suggested, followed by three different case studies. Firstly, a router-based IDS is implemented and analyzed. The quality of the event source is found to be unreliable, and the consequences for intrusion detection and correlation are evaluated. Secondly, a case study of live traffic analysis is performed using heterogeneous intrusion alert correlation, and a successful correlation is presented. Thirdly, the possibility of implementing intrusion alert correlation using open source tools is evaluated, and a simple prototype is implemented.

However, even if the performance of the intrusion detection systems increases, there will always be intrusions. One way to remedy this problem is to use fault tolerant techniques in a new setting, providing intrusion tolerance. This results in a system that will continue to provide a valid service, possibly with a performance penalty, in spite of current intrusion attempts. This thesis studies different ways to implement intrusion tolerant services. Additionally, an intrusion tolerant reverse proxy server is implemented and analyzed.

All in all, we show the value of intrusion alert correlation and intrusion tolerance in different settings.

Keywords: computer security, vulnerability, intrusion, intrusion detection, intrusion alert correlation, intrusion tolerance


November 14th 2003, 10:15

Licentiate seminar by Niklas Sörensson, Computing Science, Chalmers University of Technology:

Applications of SAT solving

Room: MD9, Matematiskt centrum, Eklandagatan 86, Chalmers

Discussion leader: PD Dr. Peter Baumgartner, Universität Koblenz-Landau, Institut für Informatik


In the area of formal verification it is well known that there can be no single logic that suits all needs. This insight motivates the diversity of this dissertation: it contains contributions to SAT solving, First Order theorem proving and Model Finding, and Symbolic Model Checking.

A growing number of problem domains are successfully being tackled by SAT solvers. Following the current trend of extending and adapting SAT solvers, we present a detailed description of a SAT solver designed for that particular purpose. The description bridges a gap between theory and practice, serving as a tutorial on modern SAT solving algorithms. Among other things, we describe how to solve a series of related SAT problems efficiently, a technique called incremental SAT solving.
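
As a rough illustration of such an interface, the sketch below implements a toy DPLL solver with MiniSat-style incremental features: clauses may be added between calls, and solving is done under assumption literals. It is a simplified stand-in, not the solver described in the thesis:

```python
class IncrementalSAT:
    """Toy DPLL solver with an incremental interface. Real solvers add
    watched literals, clause learning, and branching heuristics."""

    def __init__(self):
        self.clauses = []                    # lists of int literals; -v = not v

    def add_clause(self, lits):
        self.clauses.append(list(lits))

    def solve(self, assumptions=()):
        """Return a satisfying assignment (dict var -> bool) or None."""
        assign = {}
        for lit in assumptions:
            v, val = abs(lit), lit > 0
            if assign.get(v, val) != val:    # contradictory assumptions
                return None
            assign[v] = val
        return self._dpll(assign)

    def _dpll(self, assign):
        changed = True                       # unit propagation to fixpoint
        while changed:
            changed = False
            for clause in self.clauses:
                if any(assign.get(abs(l)) == (l > 0) for l in clause):
                    continue                 # clause already satisfied
                pending = [l for l in clause if abs(l) not in assign]
                if not pending:
                    return None              # conflict
                if len(pending) == 1:
                    assign[abs(pending[0])] = pending[0] > 0
                    changed = True
        for clause in self.clauses:          # branch on an unassigned literal
            for lit in clause:
                if abs(lit) not in assign:
                    for val in (True, False):
                        model = self._dpll({**assign, abs(lit): val})
                        if model is not None:
                            return model
                    return None
        return assign                        # every clause is satisfied
```

Solving under assumptions lets a series of related queries share one clause database, which is the essence of the incremental interface.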

For finding finite first order models, the MACE-style method, which is based on SAT solving, is well known. We improve the basic method with several techniques that can be loosely classified as either transformations that make the reduction to SAT produce fewer clauses, or techniques designed to speed up the search of the SAT solver. The resulting tool, called Paradox, performed well in the SAT division of the CASC-19 competition.

Recently, there has been great interest in methods for safety property verification that are based on SAT solving. One example is temporal induction, also called k-induction. The method requires a sequence of increasingly stronger induction proofs to be performed. We show how this sequence of proofs can be solved more efficiently using incremental SAT solving.

Finally, constraint merging tableaux maintain a system of all closing substitutions of all subtableaux up to a certain depth, which is incrementally increased. This avoids the backtracking that is necessary in destructive first order free variable tableaux. We analyse the reasons why lazy functional implementations have so far been problematic, and we give a solution. The resulting implementation in Haskell is compact and modular.


November 13th 2003, 13:15

Licentiate seminar by Magnus Björk, Computing Science, Chalmers University of Technology:

Stålmarck's Method for Automated Theorem Proving in First Order Logic

Room: MD9, Matematiskt centrum, Eklandagatan 86, Chalmers

Discussion leader: Ulrich Furbach, Universität Koblenz, Institut für Informatik


We present an extension of Stålmarck's method to classical first order predicate logic. Stålmarck's method is a satisfiability checking method for propositional logic that resembles tableaux and KE. Its most distinctive feature is the dilemma rule, an extended branching rule that allows branches to be recombined: after the merge, the intersection of the consequences derived in the two branches is kept. Stålmarck's method for propositional logic is known to be sound and complete, and has turned out to be efficient in industrial applications.
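
The dilemma rule can be illustrated on plain CNF clauses with unit propagation playing the role of the simple derivation rules. This sketch is only an analogy, since Stålmarck's method actually operates on triplets derived from the formula rather than on raw clauses:

```python
def propagate(clauses, assign):
    """Unit propagation: extend assign (dict var -> bool) to a fixpoint;
    return the extended assignment, or None on conflict."""
    assign = dict(assign)
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            if any(assign.get(abs(l)) == (l > 0) for l in clause):
                continue                      # clause already satisfied
            pending = [l for l in clause if abs(l) not in assign]
            if not pending:
                return None                   # conflict
            if len(pending) == 1:
                assign[abs(pending[0])] = pending[0] > 0
                changed = True
    return assign

def dilemma(clauses, assign, var):
    """Branch on var, propagate both branches, and keep the intersection
    of the consequences derived in the two branches."""
    pos = propagate(clauses, {**assign, var: True})
    neg = propagate(clauses, {**assign, var: False})
    if pos is None and neg is None:
        return None                           # contradiction either way
    if pos is None:
        return neg                            # only one branch survives
    if neg is None:
        return pos
    return {v: b for v, b in pos.items() if neg.get(v) == b}
```

For example, from the clauses (not a or b) and (a or b), the dilemma rule applied to a derives b without ever committing to a value for a.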

The extension of Stålmarck's method uses a tableaux-like calculus with both universal and rigid variables. We define a notion of logical intersections, in which the pairwise unifiable formulas from two sets of formulas are found. We also describe how rigid variables can be transformed into universal variables, when the two branches of a dilemma rule are merged. We define the proof system rigorously, and prove that it is sound. We also prove the specialized derivations theorem, which states that instantiating a rigid variable in all steps of a proof preserves proof correctness. We then outline a proof procedure, and discuss some of its features. A prototype implementing parts of the proof procedure has been made.


October 13th, 2003, 10:15

Licentiate seminar by Niklas Eén, Computing Science, Chalmers University of Technology:

SAT-Based Model Checking

Room: MD3, Matematiskt centrum, Eklandagatan 86, Chalmers, Göteborg


In this Thesis we study automatic reasoning about finite state machines (FSMs). Two such techniques, with particular applicability to hardware verification, are presented. In both techniques, the verification is carried out by translating the problem into propositional logic; satisfiability and validity of propositional formulas are decided by a SAT-solver. For this reason, the fundamental techniques of a modern SAT-solver are also presented.

The material falls into the category of symbolic model checking (SMC). By model checking, we refer to fully automatic techniques for verifying temporal properties of systems; such techniques are frequently applied in industry. If the state-space of the system is not represented explicitly (by a graph, for instance), we call the approach symbolic. Conventional SMC uses binary decision diagrams (BDDs), a canonical representation of boolean functions, to compute and represent subsets of the state-space. Although algorithms based on BDDs have been successful in many applications, they have limitations that cannot easily be alleviated. We seek to replace BDDs by SAT-solvers in the hope of lifting some of those limitations. Many SMC-tools allow properties to be specified in a temporal logic; we focus entirely on proving safety properties, i.e. properties that are true in all states reachable from an initial state of the system.

The first paper of the Thesis shows how reachability analysis (computing a symbolic representation of the reachable states) can be performed in much the same way as for BDDs, using a non-canonical representation of boolean functions. The method includes a translation from quantified boolean formulas (QBFs) to propositional formulas, and the use of a SAT-solver for termination checks.
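
The fixpoint structure of such a reachability analysis can be sketched as follows, with explicit Python sets standing in for the symbolic representation and a subset test standing in for the SAT-based termination check:

```python
def reachable(init, step):
    """Forward reachability fixpoint. init: set of states;
    step: function from a state to an iterable of successor states."""
    reached = set(init)
    frontier = set(init)
    while frontier:
        image = {t for s in frontier for t in step(s)}  # image computation
        frontier = image - reached                      # termination check
        reached |= frontier
    return reached

def check_safety(init, step, bad):
    """A safety property holds iff no reachable state is bad."""
    return reachable(init, step).isdisjoint(bad)
```

The loop terminates exactly when the image of the frontier adds no new states, which is the check the thesis delegates to a SAT-solver.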

The second paper of the Thesis shows how safety properties can be proven by means of temporal induction (also known as k-induction). Several improvements are made to previous techniques, in particular by the introduction and novel use of an incremental SAT-solver. The performance gain is documented by thorough testing.
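
To illustrate the proof obligations involved in temporal induction, the following sketch replays the base and step cases by explicit-state enumeration, where the thesis instead issues SAT queries; the uniqueness constraint on path states is what makes the method complete on finite systems:

```python
def k_induction(states, init, trans, prop, max_k=10):
    """Prove that prop holds in all reachable states by temporal induction
    of increasing depth. Returns True (proved), False (base-case
    counterexample), or None (undecided within max_k)."""

    def paths(length, from_init):
        # All paths with `length` transitions, optionally starting in init.
        result = [[s] for s in (init if from_init else states)]
        for _ in range(length):
            result = [p + [t] for p in result for t in trans(p[-1])]
        return result

    for k in range(max_k + 1):
        # Base case: prop holds in every state within k steps of init.
        if any(not prop(p[-1]) for p in paths(k, from_init=True)):
            return False
        # Step case: k+1 consecutive distinct prop-states imply prop next.
        if all(prop(p[-1]) for p in paths(k + 1, from_init=False)
               if all(prop(s) for s in p[:-1])
               and len(set(p[:-1])) == len(p) - 1):
            return True
    return None
```

Each iteration strengthens both obligations, and in a SAT-based setting the queries for depth k share most of their clauses with those for depth k-1, which is where incremental solving pays off.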

The third and final paper of the Thesis documents in detail how a modern SAT-solver is constructed and suggests some extensions. It shows how arbitrary boolean constraints can be added to a SAT-solver, and also implements an incremental SAT-interface.


September 26th, 2003, 13:15

Licentiate seminar by Dhammika Bokolamulla, Computer Engineering, Chalmers University of Technology:

Reduced Complexity Techniques for Iterative Decoding

Room: EL41, Linsen, Hörsalsvägen 11, Chalmers, Göteborg


Despite the impressive performance of concatenated coding and iterative decoding, one main concern in the implementation of such systems is their complexity. For some applications, the full-complexity iterative decoder is prohibitively complex. It has been an interesting and important open question to find a larger class of decoding algorithms for concatenated codes that allows one to trade off performance against complexity over a larger range. This thesis discusses some general ideas that provide a conceptual understanding of this problem.

First, the fundamental concept of approximating a posteriori probabilities (APPs) using a subset of the possible sequences is explored. Such sequences are found by a limited bi-directional trellis search algorithm. The reduced search algorithm can operate in a smaller trellis (with fewer states than the standard iterative decoder uses), resulting in a reduction in the decoder's complexity. Also presented is a simpler complexity reduction strategy, based on trellis pruning. Since the complexity of an iterative decoder is proportional to the size of its trellis diagram, pruning results in a simpler decoder. The criterion for pruning is the removal of branches whose associated a priori values fall below a certain threshold.

The effectiveness of the algorithms is illustrated in terms of bit error rate (BER) performance and extrinsic information transfer (EXIT) chart analysis. According to the numerical examples, both approaches can achieve a substantial reduction in decoding complexity for a negligible degradation in performance.

Keywords: Iterative detection, turbo codes, serially concatenated codes, trellis complexity, intersymbol interference, soft information, stopping criteria, trellis pruning, symbol-by-symbol MAP detection, reduced-complexity decoding.


September 19th, 2003, 13:15

Licentiate seminar by Zihuai Lin, Computer Engineering, Chalmers University of Technology:

Joint Source and Channel Coding Using Combined TCQ and CPM Schemes

Room: EL41, Linsen, Hörsalsvägen 11, Chalmers, Göteborg


Joint source and channel coding using combined Trellis Coded Quantization (TCQ) and Continuous Phase Modulation (CPM) has been studied. TCQ has been shown to offer improved source coding performance (compared with Lloyd-Max quantization) while additionally offering some error immunity. CPM, on the other hand, saves bandwidth compared to memoryless modulation methods by ensuring that the phase changes continuously between adjacent channel symbols. The design goal of the combined TCQ/CPM system is to create a unified system that incorporates all the performance benefits realized by the separate subsystems. Using a combined source and channel coding scheme in conjunction with a CPM system, the power/bandwidth efficiencies of both the trellis code and the CPM scheme can be realized.

The optimal soft decoding algorithm for the joint TCQ and trellis coded CPM scheme is first derived, and its performance in terms of SQNR is compared with that of the joint TCQ/CPM system with hard decoding. The simulation results show that a large improvement can be achieved with soft decoding, especially at low to medium SNR. An iterative decoding algorithm for a serially concatenated TCQ/CPM system is also proposed. It is demonstrated that the combined TCQ/CPM system simultaneously realizes a performance improvement and power savings over a QPSK system with separately optimized subsystems. Furthermore, TCQ/CPM realizes a bandwidth reduction over memoryless modulators.


September 18th, 2003, 14:15

Licentiate seminar by Jochen Hollmann, Computer Engineering, Chalmers University of Technology:

Latency Reduction and Tolerance in Distributed Digital Libraries

Room: Lecture hall EC, Hörsalsvägen 11, Chalmers, Göteborg


Caching and prefetching are well known to reduce and tolerate latency in distributed systems such as the World Wide Web (WWW). In the scientific world those techniques could be used for digital libraries, where latency is experienced in the retrieval of publications from publishers.

This thesis is an investigation into the prospects of applying caching and prefetching in the context of distributed digital libraries. The focus is on document access from remote fulltext repositories, because latency caused during the search process can be effectively handled by keeping bibliographic data in a local database. The main question to be reflected on is how caching and prefetching can be adapted to be effective in reducing or tolerating latency in distributed digital libraries containing research articles.

In a feasibility study using user activity traces collected from a production system, I found that both caching and prefetching may be used to reduce the latency. A small percentage of the journals accounted for most of the accesses, which could guide preloading strategies for those journals. However, the larger potential was shown for prefetching during the search and selection process: during the time the user views a bibliographic record, it should in most cases be possible to transfer the associated document. Such a scheme would transfer about twice as many documents as one without prefetching.

The results were verified by implementing a prototype system at Chalmers, which was made available to the local research community. I found that with a simple, low-cost setup I reduced the average latency to 50% of its original value. To achieve this, all articles were transferred immediately when the user requested the bibliographic record. This resulted in fewer than twice as many article transfers as without prefetching.

Investigation of caching based on the user activity traces from the production system showed that it works well only for reuse by the same user, which can be handled effectively on the client machines. Re-accesses by more than one person are very rare. I observed extremely low access frequencies, with many documents accessed only once. This also revealed that the two-year observation time frame I used was too short to see all effects. Hence my research suggests that client machines keep all accessed articles, while new articles are handled by prefetching at a common gateway.


September 12th, 2003, 14:00

Licentiate seminar by Magnus Ekman, Computer Engineering, Chalmers University of Technology:

Performance and Energy Aware Design Trade-Offs in Chip-Multiprocessors

Room: EL41, Linsen, Hörsalsvägen 11, Chalmers, Göteborg


Increased integration and improved manufacturing techniques have made it possible to place multiple processor cores on the same die, in order to build chip-multiprocessors (CMPs). This is advantageous from several points of view: it offers greater performance gains than making a single core more wide-issue, and the replication of cores makes design faster and easier. However, it also introduces new architectural trade-offs, such as how to manage the more complex on-chip memory system, and whether the transistors should be spent on a few complex cores or on numerous simple ones. This thesis addresses these two specific issues.

The thesis shows that traditional snoopy cache protocols are not very efficient with respect to power and energy. In particular, previously proposed techniques for reducing snooping-induced energy, aimed at multiprocessor servers, do not work well in a CMP environment. Further, in order to reduce the power consumption of the translation lookaside buffers (TLBs), virtually addressed caches are studied. It is shown that while these do decrease the power consumption of the TLBs, they can increase the power consumption of the snoopy cache protocol. To solve this, two new techniques for cutting down on the power consumption are introduced. Finally, the thesis considers the trade-off between the number of cores and their issue-width in a CMP. It is shown that by allowing up to four instructions to be issued each cycle, close to optimal multi-thread performance can be achieved without trading off too much single-thread performance for the applications studied.

Keywords: chip-multiprocessors, energy-efficiency, caches, multiple-issue processors, performance evaluation, cache coherence


June 6th, 2003, 13:15

Licentiate seminar by Fredrik Warg, Computer Engineering, Chalmers University of Technology:

Module-Level Speculative Execution Techniques on Chip Multiprocessors

Room: EL41, Linsen 4th floor, Hörsalsvägen 11, Chalmers, Göteborg


Thread-level parallelism is an increasingly popular target for improving computer system performance; architectures such as chip multiprocessors and multithreaded cores are designed to take advantage of parallel threads within a single chip. The performance of existing single-threaded programs can be improved with automatic parallelization and thread-level speculation; however, overheads associated with speculation can be a major hurdle towards achieving significant performance gains.

This thesis investigates module-level parallelism, i.e. spawning a speculative thread that runs the module continuation in parallel with the called module. The results of simulations with an ideal speculative chip multiprocessor show that execution time can potentially be halved, on average, with module-level parallelism, but that misspeculations are common and that speculation overheads dominate the execution time if new threads are started for every module continuation.

In order to deal with the overhead, techniques for selectively starting new threads only when they are expected to yield useful parallelism are introduced. The first technique is aimed at removing small threads. It employs a lower threshold on module run-length; a new thread is spawned only if the run-length of the called module exceeds the threshold. In addition, run-length prediction is used to predict if the run-length will exceed the threshold. The chosen predictor typically achieves over 90% accuracy, and with this technique speculation overheads can be tolerated.
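
A minimal version of such a spawn decision might look as follows; the last-value predictor and the call-site keying are illustrative assumptions, not the exact predictor evaluated in the thesis:

```python
class RunLengthPredictor:
    """Hypothetical last-value run-length predictor: spawn a speculative
    thread for a module continuation only when the run-length last
    observed at that call site exceeded the threshold."""

    def __init__(self, threshold):
        self.threshold = threshold
        self.last = {}                # call site -> last observed run-length

    def should_spawn(self, site):
        # Be conservative (no spawn) for call sites never seen before.
        return self.last.get(site, 0) > self.threshold

    def update(self, site, run_length):
        self.last[site] = run_length  # last-value prediction
```

A last-value scheme captures the observation that a module call's run-length tends to repeat across dynamic instances of the same call site.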

The second technique, misspeculation prediction, is aimed at bringing down the total overhead. Module calls are selected for speculation based on whether or not spawning a new thread is expected to result in a misspeculation. When threads are spawned for all continuations, the total overhead is on average 336% extra cycles compared to sequential execution; with misspeculation prediction, the average overhead can be brought down to 54%.

The proposed techniques can be applied in run-time to speed up existing applications on chip multiprocessors with thread-level speculation support.


Keywords: chip multiprocessors, thread-level speculation, module-level parallelism, module run-length prediction, misspeculation prediction, value prediction.


May 27th, 2003, 10:15

Licentiate seminar by Hoai Hoang, Computer Engineering, Chalmers University of Technology:

Switched Real-Time Ethernet for Industrial Applications

Room: Wigforssalen, Kristian IV:s väg, Halmstad University


The research reported in this thesis has focused on how to support real-time traffic over a switched Ethernet network without any hardware or protocol modifications. The work has resulted in a proposed system model supporting both real-time and non-real-time traffic. Currently this model is intended for a one-switch network with no shared media. All traffic handling added to support real-time communication is placed in a thin layer (the RT layer) between the Ethernet layer and the TCP/IP suite, which ensures adaptation to the surrounding protocol standards. The RT layer manages traffic on the basis of virtual connections, denoted RT channels, as well as packet-level scheduling. RT channels are created between end-nodes prior to any real-time traffic. Asymmetric deadline partitioning between the links of an RT channel is also proposed, in order to increase the number of possible RT channels.
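
To illustrate the idea, the following sketch splits an end-to-end deadline across the two links of an RT channel and applies a simple sufficient EDF utilization test per link. The proportional rule and the test are illustrative assumptions rather than the exact analysis from the thesis:

```python
def partition_deadline(deadline, load_up, load_down):
    """Split an end-to-end deadline between the uplink and downlink of an
    RT channel in proportion to current link load, so that the busier
    link receives more slack. Returns (uplink_deadline, downlink_deadline)."""
    total = load_up + load_down
    if total == 0:
        return deadline / 2, deadline / 2     # fall back to symmetric split
    return deadline * load_up / total, deadline * load_down / total

def edf_feasible(channels):
    """Simple sufficient EDF test for one link: the sum of
    transmission-time/deadline ratios must not exceed 1.
    channels: list of (transmission_time, link_deadline) pairs."""
    return sum(c / d for c, d in channels) <= 1.0
```

Giving the busier link the larger share of the deadline is what lets more RT channels pass the per-link feasibility test than a symmetric split would.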

Keywords: Switched Ethernet, Real-time communication, EDF scheduling, Industrial networks.


30 April 2003, 10:15

Licentiate seminar by Boris Koldehofe, Computing Science, Chalmers University of Technology

Collaborative Environments: Aspects in Communication and Educational Visualisation

Room: MD8, Matematiskt centrum, Eklandagatan 86, Göteborg


This thesis examines several technical and educational aspects of collaborative environments. Collaborative environments allow multiple users to share and modify data objects in real time, even though the users may not be present at the same physical location. The users of the collaborative environment form a group in which they can exchange information about the state of shared objects and updates to them. As existing applications in collaborative virtual environments show, supporting large-scale collaborative environments while providing users with a notion of consistency on the shared objects is difficult.

The physical distance between users can be large and the state of objects can be complex, so there is only limited capacity to transmit the shared state and propagate state changes between users. Moreover, objects can change as a function of time, i.e. they can change state even without any update being involved. Another aspect affecting the scalability of collaborative environments is the organisation of membership for users and the mechanisms by which users and objects become aware of each other.

This thesis examines, on the one hand, technical aspects at the communication level of such systems; on the other hand, we present a possible application in the educational visualisation of distributed systems.

The first part of the thesis focuses on algorithms, techniques and concepts needed to build collaborative environments which scale well and provide the users with a notion of consistency. In particular, we analyse aspects of group communication that help to organise group membership in a way that scales to many processes and is robust against faults in the communication traffic.

The second part of this thesis discusses educational visualisation and applications to collaborative systems in the context of LYDIAN, an extensible educational animation library. Besides discussing aspects of building a general purpose visualisation environment, we evaluate a way to visualise a distributed concept by using collaboration.

14 March 2003, 10:15

Licentiate seminar by Kristofer Johannisson, Computing Science, Göteborg University

Computer Assisted Proofs and Specifications

Room: MD9, Matematiskt centrum, Eklandagatan 86, Göteborg


This thesis is concerned with two different lines of work, contained in two papers. The first presents a formal proof of the undecidability of the halting problem; the second describes a tool for authoring informal and formal software requirements specifications. Both works make use of constructive type theory, via the proof assistant Agda and the Grammatical Framework (GF), respectively.

The Agda proof in the first paper is based on formulating a number of axioms, from which the undecidability of the halting problem is proved. These axioms describe the properties a model of computation should satisfy for the result to hold. This makes the proof abstract, and means that we avoid getting into the details of representing a particular model of computation, such as Turing machines or the lambda calculus, in Agda. Carrying out the proof in a constructive logic does not require any major reformulation of the standard, classical proof.

The second paper describes the foundations and design principles of a tool that supports authoring informal and formal software requirements specifications simultaneously and from a single source. The tool is an attempt to bridge the gap between completely informal requirements specifications (as found in practice) and formal ones (as needed in formal methods). The user is supported by an interactive syntax-directed editor, parsers and linearizers. As the formal specification language we realize the Object Constraint Language, a substandard of the UML; on the informal side, a fragment of English. The tool is intended for integration into the KeY system, which integrates object oriented design with formal methods. The implementation is based on the Grammatical Framework, a generic tool that combines linguistic and logical methods.


28 February 2003, 10:00

Licentiate seminar by Mattias Grönkvist, Computing Science, Chalmers University of Technology

Tail Assignment - A Combined Column Generation and Constraint Programming Approach

Room: MD7, Matematiskt centrum, Eklandagatan 86, Göteborg


In today's volatile airline industry, well planned resource utilization is critical. One of the main resources of an airline is its aircraft, which makes it crucial to use them as efficiently as possible. We study the problem of planning which individual aircraft should operate which flight, so as to ensure efficient, safe and robust operation. This problem is called the Tail Assignment problem, as it assigns aircraft, identified by tail numbers, to flights.

The Tail Assignment problem handles individual operational constraints, such as maintenance constraints and curfews, while optimizing some objective. Often, the objective models maximization of robustness of operation, for example by maximizing the standby time of the aircraft. This provides opportunities for aircraft to be used as reserves in case of disruptions, thus providing increased robustness. Our approach can also model profit maximization, by considering the operating cost versus the capacity of the individual aircraft. We present mathematical models for the Tail Assignment problem, and suggest a solution procedure based on column generation. We show how integrating Constraint Programming techniques in different ways can increase the performance of the column generator. We further develop techniques to obtain solutions quickly, and to obtain near optimal solutions. Results are presented for real world Tail Assignment instances from European and American airlines.

24 January 2003, 10:15

Licentiate seminar by Wojciech Mostowski, Computing Science, Chalmers University of Technology:

Towards Development of Safe and Secure Java Card Applets

Room: MD1, Matematiskt centrum, Eklandagatan 86, Göteborg

This thesis is concerned with different aspects of Java Card application development and the use of formal methods in the Java Card world. Java Card is a technology that makes it possible to program smart (chip) cards in (a subset of) the Java language. The use of formal methods in the Java Card context is highly justified by the criticality of Java Card applications. First, they are usually security critical (e.g., authentication, electronic cash); second, they are cost critical (they are distributed in large numbers, making updates difficult); and finally, they can be legally critical (e.g., where digital signature law is concerned). The robustness and correctness of Java Card applications should therefore be enforced by the best means possible, i.e., by formal verification techniques. At the same time, Java Card is a good target for formal verification: owing to the relative simplicity of Java Card applications (compared to full Java), formal verification becomes a feasible and manageable task.

In this thesis, we touch upon several levels of Java Card application development and the use of formal methods. We start by defining a UML/OCL (Unified Modelling Language, Object Constraint Language) supported development process specifically tailored to Java Card applications. We then extend the logic used in formal verification of Java Card programs to handle so-called "rip out" properties (properties that should be maintained in case of an unexpected termination of a Java Card program), which are specific to Java Card. Finally, we provide simple tool support for Java Card program compilation and testing inside a CASE (Computer Aided Software Engineering) tool.
The main purpose of this work is to ensure the robustness of Java Card applications by providing a well-defined development process, together with means to formally verify properties specific to Java Card applications, so that applets can be developed that are robust "by design". At the same time, we want to make rigorous Java Card development relatively easy, tool-supported (automated wherever possible) and usable by people who do not have years of training in formal methods.
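The "rip out" properties above concern what happens when a card is pulled from the reader mid-update. The following toy model (in Python rather than Java Card, and in no way the thesis's actual logic; the purse, its invariant and the tear simulation are all invented for illustration) shows why an update spanning several writes needs transactional protection:

```python
class Torn(Exception):
    """Simulates the card being ripped out (power loss) between two writes."""

class Purse:
    """Toy electronic purse. Desired 'rip out' invariant: the balance has been
    debited exactly as many times as the transaction log records."""
    def __init__(self, balance=100):
        self.balance = balance
        self.debits_logged = 0

    def debit_unsafe(self, amount, tear=False):
        # Two separate persistent writes; a tear between them breaks the invariant.
        self.balance -= amount
        if tear:
            raise Torn()
        self.debits_logged += 1

    def debit_atomic(self, amount, tear=False):
        # Transactional style: compute everything first, commit both writes
        # together at the end (a rough stand-in for Java Card's
        # beginTransaction/commitTransaction mechanism).
        new_balance = self.balance - amount
        new_log = self.debits_logged + 1
        if tear:
            raise Torn()  # nothing committed yet, so the state is unchanged
        self.balance, self.debits_logged = new_balance, new_log

p = Purse()
try:
    p.debit_unsafe(30, tear=True)
except Torn:
    pass
print(p.balance, p.debits_logged)  # 70 0: invariant broken by the tear

q = Purse()
try:
    q.debit_atomic(30, tear=True)
except Torn:
    pass
print(q.balance, q.debits_logged)  # 100 0: state untouched, invariant holds
```

A rip-out property in this spirit says the invariant must hold in every state the card can be left in, which is exactly the kind of guarantee the extended verification logic is meant to establish.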




Webmaster: Catharina Jerkbrant
Last modified: 040203