Holst, S., Imhof, M.E. and Wunderlich, H.-J. High-Throughput Logic Timing Simulation on GPGPUs 2015 ACM Transactions on Design Automation of Electronic Systems (TODAES), Vol. 20(3), pp. 1-22 article DOI URL PDF 
Keywords: Verification, Performance, Gate-Level Simulation, General Purpose computing on Graphics Processing Unit (GP-GPU), Hazards, Parallel CAD, Pin-to-Pin Delay, Pulse-Filtering, Timing Simulation
Abstract: Many EDA tasks like test set characterization or the precise estimation of power consumption, power droop and temperature development, require a very large number of time-aware gate-level logic simulations. Until now, such characterizations have been feasible only for rather small designs or with reduced precision due to the high computational demands.
The new simulation system presented here is able to accelerate such tasks by more than two orders of magnitude and provides for the first time fast and comprehensive timing simulations for industrial-sized designs. Hazards, pulse-filtering, and pin-to-pin delay are supported for the first time in a GPGPU accelerated simulator, and the system can easily be extended to even more realistic delay models and further applications.
A sophisticated mapping with efficient memory utilization and access patterns as well as minimal synchronizations and control flow divergence is able to use the full potential of GPGPU architectures. To provide such a mapping, we combine for the first time the versatility of event-based timing simulation and multidimensional parallelism used in GPU-based gate-level simulators. The result is a throughput-optimized timing simulation algorithm, which runs many simulation instances in parallel and at the same time fully exploits gate-parallelism within the circuit.
BibTeX:
@article{2015_TODAES_HolstIW2015,
  author = {Holst, Stefan and Imhof, Michael E. and Wunderlich, Hans-Joachim},
  title = {High-Throughput Logic Timing Simulation on GPGPUs},
  journal = {ACM Transactions on Design Automation of Electronic Systems (TODAES)},
  year = {2015},
  volume = {20},
  number = {3},
  pages = {1--22},
  keywords = {Verification, Performance, Gate-Level Simulation, General Purpose computing on Graphics Processing Unit (GP-GPU), Hazards, Parallel CAD, Pin-to-Pin Delay, Pulse-Filtering, Timing Simulation},
  abstract = {Many EDA tasks like test set characterization or the precise estimation of power consumption, power droop and temperature development, require a very large number of time-aware gate-level logic simulations. Until now, such characterizations have been feasible only for rather small designs or with reduced precision due to the high computational demands.
The new simulation system presented here is able to accelerate such tasks by more than two orders of magnitude and provides for the first time fast and comprehensive timing simulations for industrial-sized designs. Hazards, pulse-filtering, and pin-to-pin delay are supported for the first time in a GPGPU accelerated simulator, and the system can easily be extended to even more realistic delay models and further applications.
A sophisticated mapping with efficient memory utilization and access patterns as well as minimal synchronizations and control flow divergence is able to use the full potential of GPGPU architectures. To provide such a mapping, we combine for the first time the versatility of event-based timing simulation and multidimensional parallelism used in GPU-based gate-level simulators. The result is a throughput-optimized timing simulation algorithm, which runs many simulation instances in parallel and at the same time fully exploits gate-parallelism within the circuit.},
  url = {http://dl.acm.org/citation.cfm?id=2714564},
  doi = {http://dx.doi.org/10.1145/2714564},
  file = {http://www.meimhof.de/publications/conference/2015_TODAES_HolstIW2015.pdf}
}
Dalirsani, A., Hatami, N., Imhof, M.E., Eggenberger, M., Schley, G., Radetzki, M. and Wunderlich, H.-J. On Covering Structural Defects in NoCs by Functional Tests 2014 Proc. 23rd IEEE Asian Test Symposium (ATS), pp. 87-92 inproceedings DOI URL PDF 
Keywords: Network-on-Chip (NoC), Functional Test, Functional Failure Modeling, Fault Classification, Boolean Satisfiability (SAT)
Abstract: Structural tests provide high defect coverage by considering the low-level circuit details. Functional test provides a faster test with reduced test patterns and does not imply additional hardware overhead. However, it lacks a quantitative measure of structural fault coverage. This paper fills this gap by presenting a satisfiability based method to generate functional test patterns while considering structural faults. The method targets NoC switches and links, and it is independent of the switch structure and the network topology. It can be applied for any structural fault type as it relies on a generalized structural fault model.
BibTeX:
@inproceedings{2014_ATS_DalirsaniHIESRW2014,
  author = {Dalirsani, Atefe and Hatami, Nadereh and Imhof, Michael E. and Eggenberger, Marcus and Schley, Gert and Radetzki, Martin and Wunderlich, Hans-Joachim},
  title = {On Covering Structural Defects in NoCs by Functional Tests},
  booktitle = {Proc. 23rd IEEE Asian Test Symposium (ATS)},
  year = {2014},
  pages = {87--92},
  keywords = {Network-on-Chip (NoC), Functional Test, Functional Failure Modeling, Fault Classification, Boolean Satisfiability (SAT)},
  abstract = {Structural tests provide high defect coverage by considering the low-level circuit details. Functional test provides a faster test with reduced test patterns and does not imply additional hardware overhead. However, it lacks a quantitative measure of structural fault coverage. This paper fills this gap by presenting a satisfiability based method to generate functional test patterns while considering structural faults. The method targets NoC switches and links, and it is independent of the switch structure and the network topology. It can be applied for any structural fault type as it relies on a generalized structural fault model.},
  url = {http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=6979082},
  doi = {http://dx.doi.org/10.1109/ATS.2014.27},
  file = {http://www.meimhof.de/publications/conference/2014_ATS_DalirsaniHIESRW2014.pdf}
}
Zhang, H., Kochte, M.A., Imhof, M.E., Bauer, L., Wunderlich, H.-J. and Henkel, J. GUARD: GUAranteed Reliability in Dynamically Reconfigurable Systems 2014 Proc. 51st ACM/EDAC/IEEE Design Automation Conference (DAC), pp. 1-6, HiPEAC Paper Award inproceedings DOI URL PDF 
Abstract: Soft errors are a reliability threat for reconfigurable systems implemented with SRAM-based FPGAs. They can be handled through fault tolerance techniques like scrubbing and modular redundancy. However, selecting these techniques statically at design or compile time tends to be pessimistic and prohibits optimal adaptation to changing soft error rate at runtime.
We present the GUARD method, which allows for autonomous runtime reliability management in reconfigurable architectures: Based on the error rate observed during runtime, the runtime system dynamically determines whether a computation should be executed by a hardened processor, or whether it should be accelerated by inherently less reliable reconfigurable hardware, which can trade off performance and reliability. GUARD is the first runtime system for reconfigurable architectures that guarantees a target reliability while optimizing the performance. This allows applications to dynamically choose the desired degree of reliability. Compared to related work with statically optimized fault tolerance techniques, GUARD provides up to 68.3% higher performance at the same target reliability.
BibTeX:
@inproceedings{2014_DAC_ZhangKIBWH2014,
  author = {Zhang, Hongyan and Kochte, Michael A. and Imhof, Michael E. and Bauer, Lars and Wunderlich, Hans-Joachim and Henkel, Jörg},
  title = {GUARD: GUAranteed Reliability in Dynamically Reconfigurable Systems},
  booktitle = {Proc. 51st ACM/EDAC/IEEE Design Automation Conference (DAC)},
  year = {2014},
  pages = {1--6},
  note = {HiPEAC Paper Award},
  abstract = {Soft errors are a reliability threat for reconfigurable systems implemented with SRAM-based FPGAs. They can be handled through fault tolerance techniques like scrubbing and modular redundancy. However, selecting these techniques statically at design or compile time tends to be pessimistic and prohibits optimal adaptation to changing soft error rate at runtime.
We present the GUARD method, which allows for autonomous runtime reliability management in reconfigurable architectures: Based on the error rate observed during runtime, the runtime system dynamically determines whether a computation should be executed by a hardened processor, or whether it should be accelerated by inherently less reliable reconfigurable hardware, which can trade off performance and reliability. GUARD is the first runtime system for reconfigurable architectures that guarantees a target reliability while optimizing the performance. This allows applications to dynamically choose the desired degree of reliability. Compared to related work with statically optimized fault tolerance techniques, GUARD provides up to 68.3% higher performance at the same target reliability.},
  url = {http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=6881359},
  doi = {http://dx.doi.org/10.1145/2593069.2593146},
  file = {http://www.meimhof.de/publications/conference/2014_DAC_ZhangKIBWH2014.pdf}
}
Sauer, M., Polian, I., Imhof, M.E., Mumtaz, A., Schneider, E., Czutro, A., Wunderlich, H.-J. and Becker, B. Variation-Aware Deterministic ATPG 2014 Proc. 19th IEEE European Test Symposium (ETS), pp. 87-92, Best Paper Award inproceedings DOI URL PDF 
Keywords: Variation-aware test, fault efficiency, ATPG
Abstract: In technologies affected by variability, the detection status of a small-delay fault may vary among manufactured circuit instances. The same fault may be detected, missed or provably undetectable in different circuit instances. We introduce the first complete flow to accurately evaluate and systematically maximize the test quality under variability. As the number of possible circuit instances is infinite, we employ statistical analysis to obtain a test set that achieves a fault-efficiency target with a user-defined confidence level. The algorithm combines a classical path-oriented test-generation procedure with a novel waveform-accurate engine that can formally prove that a small-delay fault is not detectable and does not count towards fault efficiency. Extensive simulation results demonstrate the performance of the generated test sets for industrial circuits affected by uncorrelated and correlated variations.
BibTeX:
@inproceedings{2014_ETS_SauerPIMSCWB2014,
  author = {Sauer, Matthias and Polian, Ilia and Imhof, Michael E. and Mumtaz, Abdullah and Schneider, Eric and Czutro, Alexander and Wunderlich, Hans-Joachim and Becker, Bernd},
  title = {Variation-Aware Deterministic ATPG},
  booktitle = {Proc. 19th IEEE European Test Symposium (ETS)},
  year = {2014},
  pages = {87--92},
  note = {Best Paper Award},
  keywords = {Variation-aware test, fault efficiency, ATPG},
  abstract = {In technologies affected by variability, the detection status of a small-delay fault may vary among manufactured circuit instances. The same fault may be detected, missed or provably undetectable in different circuit instances. We introduce the first complete flow to accurately evaluate and systematically maximize the test quality under variability. As the number of possible circuit instances is infinite, we employ statistical analysis to obtain a test set that achieves a fault-efficiency target with a user-defined confidence level. The algorithm combines a classical path-oriented test-generation procedure with a novel waveform-accurate engine that can formally prove that a small-delay fault is not detectable and does not count towards fault efficiency. Extensive simulation results demonstrate the performance of the generated test sets for industrial circuits affected by uncorrelated and correlated variations.},
  url = {http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=6847806},
  doi = {http://dx.doi.org/10.1109/ETS.2014.6847806},
  file = {http://www.meimhof.de/publications/conference/2014_ETS_SauerPIMSCWB2014.pdf}
}
Dalirsani, A., Imhof, M.E. and Wunderlich, H.-J. Structural Software-Based Self-Test of Network-on-Chip 2014 Proc. 32nd IEEE VLSI Test Symposium (VTS), pp. 1-6 inproceedings DOI URL PDF 
Keywords: Network-on-Chip (NoC), Software-Based Self-Test (SBST), Automatic Test Pattern Generation (ATPG), Boolean Satisfiability (SAT)
Abstract: Software-Based Self-Test (SBST) is extended to the switches of complex Network-on-Chips (NoC). Test patterns for structural faults are turned into valid packets by using satisfiability (SAT) solvers. The test technique provides a high fault coverage for both manufacturing test and online test.
BibTeX:
@inproceedings{2014_VTS_DalirsaniIW2014,
  author = {Dalirsani, Atefe and Imhof, Michael E. and Wunderlich, Hans-Joachim},
  title = {Structural Software-Based Self-Test of Network-on-Chip},
  booktitle = {Proc. 32nd IEEE VLSI Test Symposium (VTS)},
  year = {2014},
  pages = {1--6},
  keywords = {Network-on-Chip (NoC), Software-Based Self-Test (SBST), Automatic Test Pattern Generation (ATPG), Boolean Satisfiability (SAT)},
  abstract = {Software-Based Self-Test (SBST) is extended to the switches of complex Network-on-Chips (NoC). Test patterns for structural faults are turned into valid packets by using satisfiability (SAT) solvers. The test technique provides a high fault coverage for both manufacturing test and online test.},
  url = {http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=6818754},
  doi = {http://dx.doi.org/10.1109/VTS.2014.6818754},
  file = {http://www.meimhof.de/publications/conference/2014_VTS_DalirsaniIW2014.pdf}
}
Imhof, M.E. and Wunderlich, H.-J. Bit-Flipping Scan - A Unified Architecture for Fault Tolerance and Offline Test 2014 Proc. Design, Automation and Test in Europe (DATE), pp. 1-6 inproceedings DOI URL PDF 
Keywords: Bit-Flipping Scan, Fault Tolerance, Test, Compaction, ATPG, Satisfiability
Abstract: Test is an essential task since the early days of digital circuits. Every produced chip undergoes at least a production test supported by on-chip test infrastructure to reduce test cost. Throughout the technology evolution fault tolerance gained importance and is now necessary in many applications to mitigate soft errors threatening consistent operation. While a variety of effective solutions exists to tackle both areas, test and fault tolerance are often implemented orthogonally, and hence do not exploit the potential synergies of a combined solution.
The unified architecture presented here facilitates fault tolerance and test by combining a checksum of the sequential state with the ability to flip arbitrary bits. Experimental results confirm a reduced area overhead compared to an orthogonal combination of classical test and fault tolerance schemes. In combination with heuristically generated test sequences the test application time and test data volume are reduced significantly.
BibTeX:
@inproceedings{2014_DATE_ImhofW2014,
  author = {Imhof, Michael E. and Wunderlich, Hans-Joachim},
  title = {Bit-Flipping Scan - A Unified Architecture for Fault Tolerance and Offline Test},
  booktitle = {Proc. Design, Automation and Test in Europe (DATE)},
  year = {2014},
  pages = {1--6},
  keywords = {Bit-Flipping Scan, Fault Tolerance, Test, Compaction, ATPG, Satisfiability},
  abstract = {Test is an essential task since the early days of digital circuits. Every produced chip undergoes at least a production test supported by on-chip test infrastructure to reduce test cost. Throughout the technology evolution fault tolerance gained importance and is now necessary in many applications to mitigate soft errors threatening consistent operation. While a variety of effective solutions exists to tackle both areas, test and fault tolerance are often implemented orthogonally, and hence do not exploit the potential synergies of a combined solution.
The unified architecture presented here facilitates fault tolerance and test by combining a checksum of the sequential state with the ability to flip arbitrary bits. Experimental results confirm a reduced area overhead compared to an orthogonal combination of classical test and fault tolerance schemes. In combination with heuristically generated test sequences the test application time and test data volume are reduced significantly.},
  url = {http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=6800407},
  doi = {http://dx.doi.org/10.7873/DATE.2014.206},
  file = {http://www.meimhof.de/publications/conference/2014_DATE_ImhofW2014.pdf}
}
Baranowski, R., Cook, A., Imhof, M.E., Liu, C. and Wunderlich, H.-J. Synthesis of Workload Monitors for On-Line Stress Prediction 2013 Proc. 16th IEEE International Symposium on Defect and Fault Tolerance in VLSI and Nanotechnology Systems (DFTS), pp. 137-142 inproceedings DOI URL PDF 
Keywords: Reliability estimation, workload monitoring, aging prediction, NBTI
Abstract: Stringent reliability requirements call for monitoring mechanisms to account for circuit degradation throughout the complete system lifetime. In this work, we efficiently monitor the stress experienced by the system as a result of its current workload. To achieve this goal, we construct workload monitors that observe the most relevant subset of the circuit’s primary and pseudo-primary inputs and produce an accurate stress approximation. The proposed approach enables the timely adoption of suitable countermeasures to reduce or prevent any deviation from the intended circuit behavior. The relation between monitoring accuracy and hardware cost can be adjusted according to design requirements. Experimental results show the efficiency of the proposed approach for the prediction of stress induced by Negative Bias Temperature Instability (NBTI) in critical and near-critical paths of a digital circuit.
BibTeX:
@inproceedings{2013_DFTS_BaranowskiCILW2013,
  author = {Baranowski, Rafal and Cook, Alejandro and Imhof, Michael E. and Liu, Chang and Wunderlich, Hans-Joachim},
  title = {Synthesis of Workload Monitors for On-Line Stress Prediction},
  booktitle = {Proc. 16th IEEE International Symposium on Defect and Fault Tolerance in VLSI and Nanotechnology Systems (DFTS)},
  year = {2013},
  pages = {137--142},
  keywords = {Reliability estimation, workload monitoring, aging prediction, NBTI},
  abstract = {Stringent reliability requirements call for monitoring mechanisms to account for circuit degradation throughout the complete system lifetime. In this work, we efficiently monitor the stress experienced by the system as a result of its current workload. To achieve this goal, we construct workload monitors that observe the most relevant subset of the circuit’s primary and pseudo-primary inputs and produce an accurate stress approximation. The proposed approach enables the timely adoption of suitable countermeasures to reduce or prevent any deviation from the intended circuit behavior. The relation between monitoring accuracy and hardware cost can be adjusted according to design requirements. Experimental results show the efficiency of the proposed approach for the prediction of stress induced by Negative Bias Temperature Instability (NBTI) in critical and near-critical paths of a digital circuit.},
  url = {http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=6653596},
  doi = {http://dx.doi.org/10.1109/DFT.2013.6653596},
  file = {http://www.meimhof.de/publications/conference/2013_DFTS_BaranowskiCILW2013.pdf}
}
Zhang, H., Bauer, L., Kochte, M.A., Schneider, E., Braun, C., Imhof, M.E., Wunderlich, H.-J. and Henkel, J. Module Diversification: Fault Tolerance and Aging Mitigation for Runtime Reconfigurable Architectures 2013 Proc. IEEE International Test Conference (ITC), pp. 1-10 inproceedings DOI URL PDF 
Keywords: Reliability, online test, fault-tolerance, aging mitigation, partial runtime reconfiguration, FPGA
Abstract: Runtime reconfigurable architectures based on Field-Programmable Gate Arrays (FPGAs) are attractive for realizing complex applications. However, being manufactured in latest semiconductor process technologies, FPGAs are increasingly prone to aging effects, which reduce the reliability of such systems and must be tackled by aging mitigation and application of fault tolerance techniques.
This paper presents module diversification, a novel design method that creates different configurations for runtime reconfigurable modules. Our method provides fault tolerance by creating the minimal number of configurations such that for any faulty Configurable Logic Block (CLB) there is at least one configuration that does not use that CLB. Additionally, we determine the fraction of time that each configuration should be used to balance the stress and to mitigate the aging process in FPGA-based runtime reconfigurable systems. The generated configurations significantly improve reliability by fault-tolerance and aging mitigation.
BibTeX:
@inproceedings{2013_ITC_ZhangBKSBIWH2013,
  author = {Zhang, Hongyan and Bauer, Lars and Kochte, Michael A. and Schneider, Eric and Braun, Claus and Imhof, Michael E. and Wunderlich, Hans-Joachim and Henkel, Jörg},
  title = {Module Diversification: Fault Tolerance and Aging Mitigation for Runtime Reconfigurable Architectures},
  booktitle = {Proc. IEEE International Test Conference (ITC)},
  year = {2013},
  pages = {1--10},
  keywords = {Reliability, online test, fault-tolerance, aging mitigation, partial runtime reconfiguration, FPGA},
  abstract = {Runtime reconfigurable architectures based on Field-Programmable Gate Arrays (FPGAs) are attractive for realizing complex applications. However, being manufactured in latest semiconductor process technologies, FPGAs are increasingly prone to aging effects, which reduce the reliability of such systems and must be tackled by aging mitigation and application of fault tolerance techniques.
This paper presents module diversification, a novel design method that creates different configurations for runtime reconfigurable modules. Our method provides fault tolerance by creating the minimal number of configurations such that for any faulty Configurable Logic Block (CLB) there is at least one configuration that does not use that CLB. Additionally, we determine the fraction of time that each configuration should be used to balance the stress and to mitigate the aging process in FPGA-based runtime reconfigurable systems. The generated configurations significantly improve reliability by fault-tolerance and aging mitigation.},
  url = {http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=6651926},
  doi = {http://dx.doi.org/10.1109/TEST.2013.6651926},
  file = {http://www.meimhof.de/publications/conference/2013_ITC_ZhangBKSBIWH2013.pdf}
}
Bauer, L., Braun, C., Imhof, M.E., Kochte, M.A., Schneider, E., Zhang, H., Henkel, J. and Wunderlich, H.-J. Test Strategies for Reliable Runtime Reconfigurable Architectures 2013 IEEE Transactions on Computers, Vol. 62(8), pp. 1494-1507 article DOI URL PDF 
Keywords: FPGA, Reconfigurable Architectures, Online Test
Abstract: FPGA-based reconfigurable systems allow the online adaptation to dynamically changing runtime requirements. The reliability of FPGAs, being manufactured in latest technologies, is threatened by soft errors, as well as aging effects and latent defects. To ensure reliable reconfiguration, it is mandatory to guarantee the correct operation of the reconfigurable fabric. This can be achieved by periodic or on-demand online testing.
This paper presents a reliable system architecture for runtime-reconfigurable systems, which integrates two non-concurrent online test strategies: Pre-configuration online tests (PRET) and post-configuration online tests (PORT). The PRET checks that the reconfigurable hardware is free of faults by periodic or on-demand tests. The PORT has two objectives: It tests reconfigured hardware units after reconfiguration to check that the configuration process completed correctly and it validates the expected functionality. During operation, PORT is used to periodically check the reconfigured hardware units for malfunctions in the programmable logic. Altogether, this paper presents PRET, PORT, and the system integration of such test schemes into a runtime-reconfigurable system, including the resource management and test scheduling.
Experimental results show that the integration of online testing in reconfigurable systems incurs only minimum impact on performance while delivering high fault coverage and low test latency.
BibTeX:
@article{2013_TC_BauerBIKSZHW2013,
  author = {Bauer, Lars and Braun, Claus and Imhof, Michael E. and Kochte, Michael A. and Schneider, Eric and Zhang, Hongyan and Henkel, Jörg and Wunderlich, Hans-Joachim},
  title = {Test Strategies for Reliable Runtime Reconfigurable Architectures},
  journal = {IEEE Transactions on Computers},
  publisher = {IEEE Computer Society},
  year = {2013},
  volume = {62},
  number = {8},
  pages = {1494--1507},
  keywords = {FPGA, Reconfigurable Architectures, Online Test},
  abstract = {FPGA-based reconfigurable systems allow the online adaptation to dynamically changing runtime requirements. The reliability of FPGAs, being manufactured in latest technologies, is threatened by soft errors, as well as aging effects and latent defects. To ensure reliable reconfiguration, it is mandatory to guarantee the correct operation of the reconfigurable fabric. This can be achieved by periodic or on-demand online testing.
This paper presents a reliable system architecture for runtime-reconfigurable systems, which integrates two non-concurrent online test strategies: Pre-configuration online tests (PRET) and post-configuration online tests (PORT). The PRET checks that the reconfigurable hardware is free of faults by periodic or on-demand tests. The PORT has two objectives: It tests reconfigured hardware units after reconfiguration to check that the configuration process completed correctly and it validates the expected functionality. During operation, PORT is used to periodically check the reconfigured hardware units for malfunctions in the programmable logic. Altogether, this paper presents PRET, PORT, and the system integration of such test schemes into a runtime-reconfigurable system, including the resource management and test scheduling.
Experimental results show that the integration of online testing in reconfigurable systems incurs only minimum impact on performance while delivering high fault coverage and low test latency.},
  url = {http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=6475939},
  doi = {http://dx.doi.org/10.1109/TC.2013.53},
  file = {http://www.meimhof.de/publications/conference/2013_TC_BauerBIKSZHW2013.pdf}
}
Czutro, A., Imhof, M.E., Jiang, J., Mumtaz, A., Sauer, M., Becker, B., Polian, I. and Wunderlich, H.-J. Variation-Aware Fault Grading 2012 Proc. 21st IEEE Asian Test Symposium (ATS), pp. 344-349 inproceedings DOI URL PDF 
Keywords: process variations, fault grading, Monte-Carlo, fault simulation, SAT-based, ATPG, GPGPU
Abstract: An iterative flow to generate test sets providing high fault coverage under extreme parameter variations is presented. The generation is guided by the novel metric of circuit coverage, calculated by massively parallel statistical fault simulation on GPGPUs. Experiments show that the statistical fault coverage of the generated test sets exceeds by far that achieved by standard approaches.
BibTeX:
@inproceedings{2012_ATS_CzutroIJMSBPW2012,
  author = {Czutro, Alexander and Imhof, Michael E. and Jiang, Jie and Mumtaz, Abdullah and Sauer, Matthias and Becker, Bernd and Polian, Ilia and Wunderlich, Hans-Joachim},
  title = {Variation-Aware Fault Grading},
  booktitle = {Proc. 21st IEEE Asian Test Symposium (ATS)},
  year = {2012},
  pages = {344--349},
  keywords = {process variations, fault grading, Monte-Carlo, fault simulation, SAT-based, ATPG, GPGPU},
  abstract = {An iterative flow to generate test sets providing high fault coverage under extreme parameter variations is presented. The generation is guided by the novel metric of circuit coverage, calculated by massively parallel statistical fault simulation on GPGPUs. Experiments show that the statistical fault coverage of the generated test sets exceeds by far that achieved by standard approaches.},
  url = {http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=6394227},
  doi = {http://dx.doi.org/10.1109/ATS.2012.14},
  file = {http://www.meimhof.de/publications/conference/2012_ATS_CzutroIJMSBPW2012.pdf}
}
Abdelfattah, M.S., Bauer, L., Braun, C., Imhof, M.E., Kochte, M.A., Zhang, H., Henkel, J. and Wunderlich, H.-J. Transparent Structural Online Test for Reconfigurable Systems 2012 Proc. 18th IEEE International On-Line Testing Symposium (IOLTS), pp. 37-42 inproceedings DOI URL PDF 
Keywords: FPGA, Reconfigurable Architectures, Online Test
Abstract: FPGA-based reconfigurable systems allow the online adaptation to dynamically changing runtime requirements. However, the reliability of modern FPGAs is threatened by latent defects and aging effects. Hence, it is mandatory to ensure the reliable operation of the FPGA’s reconfigurable fabric. This can be achieved by periodic or on-demand online testing.
In this paper, a system-integrated, transparent structural online test method for runtime reconfigurable systems is proposed. The required tests are scheduled like functional workloads, and thorough optimizations of the test overhead reduce the performance impact. The proposed scheme has been implemented on a reconfigurable system. The results demonstrate that thorough testing of the reconfigurable fabric can be achieved at negligible performance impact on the application.
BibTeX:
@inproceedings{2012_IOLTS_AbdelfattahBBIKZHW2012,
  author = {Abdelfattah, Mohamed S. and Bauer, Lars and Braun, Claus and Imhof, Michael E. and Kochte, Michael A. and Zhang, Hongyan and Henkel, Jörg and Wunderlich, Hans-Joachim},
  title = {Transparent Structural Online Test for Reconfigurable Systems},
  booktitle = {Proc. 18th IEEE International On-Line Testing Symposium (IOLTS)},
  year = {2012},
  pages = {37--42},
  keywords = {FPGA, Reconfigurable Architectures, Online Test},
  abstract = {FPGA-based reconfigurable systems allow the online adaptation to dynamically changing runtime requirements. However, the reliability of modern FPGAs is threatened by latent defects and aging effects. Hence, it is mandatory to ensure the reliable operation of the FPGA’s reconfigurable fabric. This can be achieved by periodic or on-demand online testing.
In this paper, a system-integrated, transparent structural online test method for runtime reconfigurable systems is proposed. The required tests are scheduled like functional workloads, and thorough optimizations of the test overhead reduce the performance impact. The proposed scheme has been implemented on a reconfigurable system. The results demonstrate that thorough testing of the reconfigurable fabric can be achieved at negligible performance impact on the application.},
  url = {http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=6313838},
  doi = {http://dx.doi.org/10.1109/IOLTS.2012.6313838},
  file = {http://www.meimhof.de/publications/conference/2012_IOLTS_AbdelfattahBBIKZHW2012.pdf}
}
Bauer, L., Braun, C., Imhof, M.E., Kochte, M.A., Zhang, H., Wunderlich, H.-J. and Henkel, J. OTERA: Online Test Strategies for Reliable Reconfigurable Architectures 2012 Proc. NASA/ESA Conference on Adaptive Hardware and Systems (AHS), pp. 38-45 inproceedings DOI URL PDF 
Abstract: FPGA-based reconfigurable systems allow the online adaptation to dynamically changing runtime requirements. However, the reliability of FPGAs, which are manufactured in latest technologies, is threatened not only by soft errors, but also by aging effects and latent defects. To ensure reliable reconfiguration, it is mandatory to guarantee the correct operation of the underlying reconfigurable fabric. This can be achieved by periodic or on-demand online testing.
The OTERA project develops and evaluates components and strategies for reconfigurable systems that feature reliable reconfiguration. The research focus ranges from structural online tests for the FPGA infrastructure and functional online tests for the configured functionality up to the resource management and test scheduling. This paper gives an overview of the project tasks and presents first results.
BibTeX:
@inproceedings{2012_AHS_BauerBIKZWH2012,
  author = {Bauer, Lars and Braun, Claus and Imhof, Michael E. and Kochte, Michael A. and Zhang, Hongyan and Wunderlich, Hans-Joachim and Henkel, Jörg},
  title = {OTERA: Online Test Strategies for Reliable Reconfigurable Architectures},
  booktitle = {Proc. NASA/ESA Conference on Adaptive Hardware and Systems (AHS)},
  year = {2012},
  pages = {38--45},
  abstract = {FPGA-based reconfigurable systems allow the online adaptation to dynamically changing runtime requirements. However, the reliability of FPGAs, which are manufactured in latest technologies, is threatened not only by soft errors, but also by aging effects and latent defects. To ensure reliable reconfiguration, it is mandatory to guarantee the correct operation of the underlying reconfigurable fabric. This can be achieved by periodic or on-demand online testing.
The OTERA project develops and evaluates components and strategies for reconfigurable systems that feature reliable reconfiguration. The research focus ranges from structural online tests for the FPGA infrastructure and functional online tests for the configured functionality up to the resource management and test scheduling. This paper gives an overview of the project tasks and presents first results.},
  url = {http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=6268667},
  doi = {http://dx.doi.org/10.1109/AHS.2012.6268667},
  file = {http://www.meimhof.de/publications/conference/2012_AHS_BauerBIKZWH2012.pdf}
}
Tran, D.A., Virazel, A., Bosio, A., Dilillo, L., Girard, P., Todri, A., Imhof, M.E. and Wunderlich, H.-J. A Pseudo-Dynamic Comparator for Error Detection in Fault Tolerant Architectures 2012 Proc. 30th IEEE VLSI Test Symposium (VTS), pp. 50-55 inproceedings DOI URL PDF 
Keywords: soft error, timing error, fault tolerance, duplication, comparison, power consumption
Abstract: Although CMOS technology scaling offers many advantages, it suffers from robustness problems caused by hard, soft and timing errors. The robustness of future CMOS technology nodes must be improved and the use of fault tolerant architectures is probably the most viable solution. In this context, the Duplication/Comparison scheme is widely used for error detection. Traditionally, this scheme uses a static comparator structure that detects hard errors. However, it is not effective for soft and timing error detection due to the possible masking of glitches by the comparator itself. To solve this problem, we propose a pseudo-dynamic comparator architecture that combines a dynamic CMOS transition detector and a static comparator. Experimental results show that the proposed comparator detects not only hard errors but also small glitches related to soft and timing errors. Moreover, its dynamic characteristics allow reducing the power consumption while keeping an equivalent silicon area compared to a static comparator. This study is the first step towards a full fault tolerant approach targeting robustness improvement of CMOS logic circuits.
BibTeX:
@inproceedings{2012_VTS_TranVBDGTIW2012,
  author = {Tran, Duc Anh and Virazel, Arnaud and Bosio, Alberto and Dilillo, Luigi and Girard, Patrick and Todri, Aida and Imhof, Michael E. and Wunderlich, Hans-Joachim},
  title = {A Pseudo-Dynamic Comparator for Error Detection in Fault Tolerant Architectures},
  booktitle = {Proc. 30th IEEE VLSI Test Symposium (VTS)},
  year = {2012},
  pages = {50--55},
  keywords = {soft error, timing error, fault tolerance, duplication, comparison, power consumption},
  abstract = {Although CMOS technology scaling offers many advantages, it suffers from robustness problems caused by hard, soft and timing errors. The robustness of future CMOS technology nodes must be improved and the use of fault tolerant architectures is probably the most viable solution. In this context, the Duplication/Comparison scheme is widely used for error detection. Traditionally, this scheme uses a static comparator structure that detects hard errors. However, it is not effective for soft and timing error detection due to the possible masking of glitches by the comparator itself. To solve this problem, we propose a pseudo-dynamic comparator architecture that combines a dynamic CMOS transition detector and a static comparator. Experimental results show that the proposed comparator detects not only hard errors but also small glitches related to soft and timing errors. Moreover, its dynamic characteristics allow reducing the power consumption while keeping an equivalent silicon area compared to a static comparator. This study is the first step towards a full fault tolerant approach targeting robustness improvement of CMOS logic circuits.},
  url = {http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=6231079},
  doi = {http://dx.doi.org/10.1109/VTS.2012.6231079},
  file = {http://www.meimhof.de/publications/conference/2012_VTS_TranVBDGTIW2012.pdf}
}
Cook, A., Hellebrand, S., Imhof, M.E., Mumtaz, A. and Wunderlich, H.-J. Built-in Self-Diagnosis Targeting Arbitrary Defects with Partial Pseudo-Exhaustive Test 2012 Proc. 13th IEEE Latin-American Test Workshop (LATW), pp. 1-4 inproceedings DOI URL PDF 
Keywords: Built-in Self-Test, Pseudo-Exhaustive Test, Built-in Self-Diagnosis
Abstract: Pseudo-exhaustive test completely verifies all output functions of a combinational circuit, which provides a high coverage of non-target faults and allows an efficient on-chip implementation. To avoid long test times caused by large output cones, partial pseudo-exhaustive test (P-PET) has been proposed recently. Here only cones with a limited number of inputs are tested exhaustively, and the remaining faults are targeted with deterministic patterns. Using P-PET patterns for built-in diagnosis, however, is challenging because of the large amount of associated response data. This paper presents a built-in diagnosis scheme which only relies on sparsely distributed data in the response sequence, but still preserves the benefits of P-PET.
BibTeX:
@inproceedings{2012_LATW_CookHIMW2012,
  author = {Cook, Alejandro and Hellebrand, Sybille and Imhof, Michael E. and Mumtaz, Abdullah and Wunderlich, Hans-Joachim},
  title = {Built-in Self-Diagnosis Targeting Arbitrary Defects with Partial Pseudo-Exhaustive Test},
  booktitle = {Proc. 13th IEEE Latin-American Test Workshop (LATW)},
  year = {2012},
  pages = {1--4},
  keywords = {Built-in Self-Test, Pseudo-Exhaustive Test, Built-in Self-Diagnosis},
  abstract = {Pseudo-exhaustive test completely verifies all output functions of a combinational circuit, which provides a high coverage of non-target faults and allows an efficient on-chip implementation. To avoid long test times caused by large output cones, partial pseudo-exhaustive test (P-PET) has been proposed recently. Here only cones with a limited number of inputs are tested exhaustively, and the remaining faults are targeted with deterministic patterns. Using P-PET patterns for built-in diagnosis, however, is challenging because of the large amount of associated response data. This paper presents a built-in diagnosis scheme which only relies on sparsely distributed data in the response sequence, but still preserves the benefits of P-PET.},
  url = {http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=6261229},
  doi = {http://dx.doi.org/10.1109/LATW.2012.6261229},
  file = {http://www.meimhof.de/publications/conference/2012_LATW_CookHIMW2012.pdf}
}
Mumtaz, A., Imhof, M.E., Holst, S. and Wunderlich, H.-J. Embedded Test for Highly Accurate Defect Localization 2011 Proc. 20th IEEE Asian Test Symposium (ATS), pp. 213-218 inproceedings DOI URL PDF 
Keywords: BIST, Pseudo-Exhaustive Testing, Diagnosis, Debug
Abstract: Modern diagnosis algorithms are able to identify the defective circuit structure directly from existing fail data without being limited to any specialized fault models. Such algorithms however require test patterns with a high defect coverage, posing a major challenge particularly for embedded testing.
In mixed-mode embedded test, a large number of pseudo-random (PR) patterns are applied prior to deterministic test patterns. Partial Pseudo-Exhaustive Testing (P-PET) replaces these pseudo-random patterns during embedded testing by partial pseudo-exhaustive patterns to test a large portion of a circuit fault-model independently. The overall defect coverage is optimized compared to random testing or deterministic tests using the stuck-at fault model while maintaining a comparable hardware overhead and the same test application time.
This work for the first time combines P-PET with a fault model independent diagnosis algorithm and shows that arbitrary defects can be diagnosed on average much more precisely than with standard embedded testing. The results are compared to random pattern testing and deterministic testing targeting stuck-at faults.
BibTeX:
@inproceedings{2011_ATS_MumtazIHW2011,
  author = {Mumtaz, Abdullah and Imhof, Michael E. and Holst, Stefan and Wunderlich, Hans-Joachim},
  title = {Embedded Test for Highly Accurate Defect Localization},
  booktitle = {Proc. 20th IEEE Asian Test Symposium (ATS)},
  year = {2011},
  pages = {213--218},
  keywords = {BIST, Pseudo-Exhaustive Testing, Diagnosis, Debug},
  abstract = {Modern diagnosis algorithms are able to identify the defective circuit structure directly from existing fail data without being limited to any specialized fault models. Such algorithms however require test patterns with a high defect coverage, posing a major challenge particularly for embedded testing.
In mixed-mode embedded test, a large number of pseudo-random (PR) patterns are applied prior to deterministic test patterns. Partial Pseudo-Exhaustive Testing (P-PET) replaces these pseudo-random patterns during embedded testing by partial pseudo-exhaustive patterns to test a large portion of a circuit fault-model independently. The overall defect coverage is optimized compared to random testing or deterministic tests using the stuck-at fault model while maintaining a comparable hardware overhead and the same test application time.
This work for the first time combines P-PET with a fault model independent diagnosis algorithm and shows that arbitrary defects can be diagnosed on average much more precisely than with standard embedded testing. The results are compared to random pattern testing and deterministic testing targeting stuck-at faults.},
  url = {http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=6114541},
  doi = {http://dx.doi.org/10.1109/ATS.2011.60},
  file = {http://www.meimhof.de/publications/conference/2011_ATS_MumtazIHW2011.pdf}
}
Baranowski, R., Di Carlo, S., Hatami, N., Imhof, M.E., Kochte, M.A., Prinetto, P., Wunderlich, H.-J. and Zoellin, C.G. Efficient Multi-level Fault Simulation of HW/SW Systems for Structural Faults 2011 SCIENCE CHINA Information Sciences, Vol. 54(9), pp. 1784-1796 article DOI URL PDF 
Keywords: Fault simulation, multi-level, transaction-level modeling
Abstract: In recent technology nodes, reliability is increasingly considered a part of the standard design flow to be taken into account at all levels of embedded systems design. While traditional fault simulation techniques based on low-level models at gate- and register transfer-level offer high accuracy, they are too inefficient to properly cope with the complexity of modern embedded systems. Moreover, they do not allow for early exploration of design alternatives when a detailed model of the whole system is not yet available, which is highly required to increase the efficiency and quality of the design flow. Multi-level models that combine the simulation efficiency of high abstraction models with the accuracy of low-level models are therefore essential to efficiently evaluate the impact of physical defects on the system. This paper proposes a methodology to efficiently implement concurrent multi-level fault simulation across gate- and transaction-level models in an integrated simulation environment. It leverages state-of-the-art techniques for efficient fault simulation of structural faults together with transaction-level modeling. This combination of different models allows us to accurately evaluate the impact of faults on the entire hardware/software system while keeping the computational effort low. Moreover, since only selected portions of the system require low-level models, early exploration of different design alternatives is efficiently supported. Experimental results obtained from three case studies are presented to demonstrate the high accuracy of the proposed method when compared with a standard gate/RT mixed-level approach and the strong improvement of simulation time, which is reduced by four orders of magnitude on average.
BibTeX:
@article{2011_SCIS_BaranowskiDHIKPWZ2011,
  author = {Baranowski, Rafal and Di Carlo, Stefano and Hatami, Nadereh and Imhof, Michael E. and Kochte, Michael A. and Prinetto, Paolo and Wunderlich, Hans-Joachim and Zoellin, Christian G.},
  title = {Efficient Multi-level Fault Simulation of HW/SW Systems for Structural Faults},
  journal = {SCIENCE CHINA Information Sciences},
  year = {2011},
  volume = {54},
  number = {9},
  pages = {1784--1796},
  keywords = {Fault simulation, multi-level, transaction-level modeling},
  abstract = {In recent technology nodes, reliability is increasingly considered a part of the standard design flow to be taken into account at all levels of embedded systems design. While traditional fault simulation techniques based on low-level models at gate- and register transfer-level offer high accuracy, they are too inefficient to properly cope with the complexity of modern embedded systems. Moreover, they do not allow for early exploration of design alternatives when a detailed model of the whole system is not yet available, which is highly required to increase the efficiency and quality of the design flow. Multi-level models that combine the simulation efficiency of high abstraction models with the accuracy of low-level models are therefore essential to efficiently evaluate the impact of physical defects on the system. This paper proposes a methodology to efficiently implement concurrent multi-level fault simulation across gate- and transaction-level models in an integrated simulation environment. It leverages state-of-the-art techniques for efficient fault simulation of structural faults together with transaction-level modeling. This combination of different models allows us to accurately evaluate the impact of faults on the entire hardware/software system while keeping the computational effort low. Moreover, since only selected portions of the system require low-level models, early exploration of different design alternatives is efficiently supported. Experimental results obtained from three case studies are presented to demonstrate the high accuracy of the proposed method when compared with a standard gate/RT mixed-level approach and the strong improvement of simulation time, which is reduced by four orders of magnitude on average.},
  url = {http://www.springerlink.com/content/538081568x71808r/},
  doi = {http://dx.doi.org/10.1007/s11432-011-4366-9},
  file = {http://www.meimhof.de/publications/conference/2011_SCIS_BaranowskiDHIKPWZ2011.pdf}
}
Mumtaz, A., Imhof, M.E., Holst, S. and Wunderlich, H.-J. Eingebetteter Test zur hochgenauen Defekt-Lokalisierung 2011 Proc. 5. GMM/GI/ITG-Fachtagung Zuverlässigkeit und Entwurf (ZuE), pp. 43-47 inproceedings URL PDF 
Keywords: Eingebetteter Selbsttest, Pseudoerschöpfender Test, Diagnose, Debug; BIST, Pseudo-Exhaustive Testing, Diagnosis, Debug
Abstract: Moderne Diagnosealgorithmen können aus den vorhandenen Fehlerdaten direkt die defekte Schaltungsstruktur identifizieren, ohne sich auf spezialisierte Fehlermodelle zu beschränken. Solche Algorithmen benötigen jedoch Testmuster mit einer hohen Defekterfassung. Dies ist insbesondere im eingebetteten Test eine große Herausforderung.
Der Partielle Pseudo-Erschöpfende Test (P-PET) ist eine Methode, um die Defekterfassung im Vergleich zu einem Zufallstest oder einem deterministischen Test für das Haftfehlermodell zu erhöhen. Wird die im eingebetteten Test übliche Phase der vorgeschalteten Erzeugung von Pseudozufallsmustern durch die Erzeugung partieller pseudo-erschöpfender Muster ersetzt, kann bei vergleichbarem Hardware-Aufwand und gleicher Testzeit eine optimale Defekterfassung für den größten Schaltungsteil erreicht werden.
Diese Arbeit kombiniert zum ersten Mal P-PET mit einem fehlermodell-unabhängigen Diagnosealgorithmus und zeigt, dass sich beliebige Defekte im Mittel wesentlich präziser diagnostizieren lassen als mit Zufallsmustern oder einem deterministischen Test für Haftfehler.

Modern diagnosis algorithms are able to identify the defective circuit structure directly from existing fail data without being limited to any specialized fault models. Such algorithms however require test patterns with a high defect coverage, posing a major challenge particularly for embedded testing.
In mixed-mode embedded test, a large number of pseudo-random patterns are applied prior to deterministic test patterns. Partial Pseudo-Exhaustive Testing (P-PET) replaces these pseudo-random patterns during embedded testing by partial pseudo-exhaustive patterns to test a large portion of a circuit fault-model independently. The overall defect coverage is optimized compared to random testing or deterministic tests using the stuck-at fault model while maintaining a comparable hardware overhead and the same test application time.
This work for the first time combines P-PET with a fault model independent diagnosis algorithm and shows that arbitrary defects can be diagnosed on average much more precisely than with standard embedded testing. The results are compared to random pattern testing and deterministic testing targeting stuck-at faults.

BibTeX:
@inproceedings{2011_ZuE_MumtazIHW2011,
  author = {Mumtaz, Abdullah and Imhof, Michael E. and Holst, Stefan and Wunderlich, Hans-Joachim},
  title = {Eingebetteter Test zur hochgenauen Defekt-Lokalisierung},
  booktitle = {Proc. 5. GMM/GI/ITG-Fachtagung Zuverlässigkeit und Entwurf (ZuE)},
  year = {2011},
  pages = {43--47},
  keywords = {Eingebetteter Selbsttest, Pseudoerschöpfender Test, Diagnose, Debug; BIST, Pseudo-Exhaustive Testing, Diagnosis, Debug},
  abstract = {Moderne Diagnosealgorithmen können aus den vorhandenen Fehlerdaten direkt die defekte Schaltungsstruktur identifizieren, ohne sich auf spezialisierte Fehlermodelle zu beschränken. Solche Algorithmen benötigen jedoch Testmuster mit einer hohen Defekterfassung. Dies ist insbesondere im eingebetteten Test eine große Herausforderung.
Der Partielle Pseudo-Erschöpfende Test (P-PET) ist eine Methode, um die Defekterfassung im Vergleich zu einem Zufallstest oder einem deterministischen Test für das Haftfehlermodell zu erhöhen. Wird die im eingebetteten Test übliche Phase der vorgeschalteten Erzeugung von Pseudozufallsmustern durch die Erzeugung partieller pseudo-erschöpfender Muster ersetzt, kann bei vergleichbarem Hardware-Aufwand und gleicher Testzeit eine optimale Defekterfassung für den größten Schaltungsteil erreicht werden.
Diese Arbeit kombiniert zum ersten Mal P-PET mit einem fehlermodell-unabhängigen Diagnosealgorithmus und zeigt, dass sich beliebige Defekte im Mittel wesentlich präziser diagnostizieren lassen als mit Zufallsmustern oder einem deterministischen Test für Haftfehler.

Modern diagnosis algorithms are able to identify the defective circuit structure directly from existing fail data without being limited to any specialized fault models. Such algorithms however require test patterns with a high defect coverage, posing a major challenge particularly for embedded testing.
In mixed-mode embedded test, a large number of pseudo-random patterns are applied prior to deterministic test patterns. Partial Pseudo-Exhaustive Testing (P-PET) replaces these pseudo-random patterns during embedded testing by partial pseudo-exhaustive patterns to test a large portion of a circuit fault-model independently. The overall defect coverage is optimized compared to random testing or deterministic tests using the stuck-at fault model while maintaining a comparable hardware overhead and the same test application time.
This work for the first time combines P-PET with a fault model independent diagnosis algorithm and shows that arbitrary defects can be diagnosed on average much more precisely than with standard embedded testing. The results are compared to random pattern testing and deterministic testing targeting stuck-at faults.},
  url = {http://www.vde-verlag.de/proceedings-de/453357010.html},
  file = {http://www.meimhof.de/publications/conference/2011_ZuE_MumtazIHW2011.pdf}
}

Imhof, M.E. and Wunderlich, H.-J. Korrektur transienter Fehler in eingebetteten Speicherelementen 2011 Proc. 5. GMM/GI/ITG-Fachtagung Zuverlässigkeit und Entwurf (ZuE), pp. 76-83 inproceedings URL PDF 
Keywords: Transiente Fehler, Soft Error, Single Event Upset (SEU), Erkennung, Lokalisierung, Korrektur, Latch, Register; Single Event Effect, Soft Error, Single Event Upset (SEU), Detection, Localization, Correction, Latch, Register
Abstract: In der vorliegenden Arbeit wird ein Schema zur Korrektur von transienten Fehlern in eingebetteten, pegelgesteuerten Speicherelementen vorgestellt. Das Schema verwendet Struktur- und Informationsredundanz, um Single Event Upsets (SEUs) in Registern zu erkennen und zu korrigieren. Mit geringem Mehraufwand kann ein betroffenes Bit lokalisiert und mit einem hier vorgestellten Bit-Flipping-Latch (BFL) rückgesetzt werden, so dass die Zahl zusätzlicher Taktzyklen im Fehlerfall minimiert wird. Ein Vergleich mit anderen Erkennungs- und Korrekturschemata zeigt einen deutlich reduzierten Hardwaremehraufwand.

In this paper a soft error correction scheme for embedded level-sensitive storage elements is presented. The scheme employs structural and information redundancy to detect and correct Single Event Upsets (SEUs) in registers. With low additional hardware overhead the affected bit can be localized and reset with the presented Bit-Flipping-Latch (BFL), thereby minimizing the number of additional clock cycles in the faulty case. A comparison with other detection and correction schemes shows a significantly lower hardware overhead.

BibTeX:
@inproceedings{2011_ZuE_ImhofW2011,
  author = {Imhof, Michael E. and Wunderlich, Hans-Joachim},
  title = {Korrektur transienter Fehler in eingebetteten Speicherelementen},
  booktitle = {Proc. 5. GMM/GI/ITG-Fachtagung Zuverlässigkeit und Entwurf (ZuE)},
  year = {2011},
  pages = {76--83},
  keywords = {Transiente Fehler, Soft Error, Single Event Upset (SEU), Erkennung, Lokalisierung, Korrektur, Latch, Register; Single Event Effect, Soft Error, Single Event Upset (SEU), Detection, Localization, Correction, Latch, Register},
  abstract = {In der vorliegenden Arbeit wird ein Schema zur Korrektur von transienten Fehlern in eingebetteten, pegelgesteuerten Speicherelementen vorgestellt. Das Schema verwendet Struktur- und Informationsredundanz, um Single Event Upsets (SEUs) in Registern zu erkennen und zu korrigieren. Mit geringem Mehraufwand kann ein betroffenes Bit lokalisiert und mit einem hier vorgestellten Bit-Flipping-Latch (BFL) rückgesetzt werden, so dass die Zahl zusätzlicher Taktzyklen im Fehlerfall minimiert wird. Ein Vergleich mit anderen Erkennungs- und Korrekturschemata zeigt einen deutlich reduzierten Hardwaremehraufwand.

In this paper a soft error correction scheme for embedded level-sensitive storage elements is presented. The scheme employs structural and information redundancy to detect and correct Single Event Upsets (SEUs) in registers. With low additional hardware overhead the affected bit can be localized and reset with the presented Bit-Flipping-Latch (BFL), thereby minimizing the number of additional clock cycles in the faulty case. A comparison with other detection and correction schemes shows a significantly lower hardware overhead.},
  url = {http://www.vde-verlag.de/proceedings-de/453357015.html},
  file = {http://www.meimhof.de/publications/conference/2011_ZuE_ImhofW2011.pdf}
}

Mumtaz, A., Imhof, M.E. and Wunderlich, H.-J. P-PET: Partial Pseudo-Exhaustive Test for High Defect Coverage 2011 Proc. IEEE International Test Conference (ITC), pp. 1-8 inproceedings DOI URL PDF 
Keywords: BIST, Pseudo-Exhaustive Testing, Defect Coverage, N-Detect
Abstract: Pattern generation for embedded testing often consists of a phase generating random patterns and a second phase where deterministic patterns are applied. This paper presents a method which optimizes the first phase significantly and increases the defect coverage, while reducing the number of deterministic patterns required in the second phase.
The method is based on the concept of pseudo-exhaustive testing (PET), which was proposed as a method for fault model independent testing with high defect coverage. As its test length can grow exponentially with the circuit size, an application to larger circuits is usually impractical.
In this paper, partial pseudo-exhaustive testing (P-PET) is presented as a synthesis technique for multiple-polynomial feedback shift registers. It scales with current technology and is comparable to conventional pseudo-random (PR) pattern testing regarding test costs and test application time. The advantages with respect to defect coverage, N-detectability for stuck-at faults and the reduction of deterministic test lengths are shown using state-of-the-art industrial circuits.
BibTeX:
@inproceedings{2011_ITC_MumtazIW2011,
  author = {Mumtaz, Abdullah and Imhof, Michael E. and Wunderlich, Hans-Joachim},
  title = {P-PET: Partial Pseudo-Exhaustive Test for High Defect Coverage},
  booktitle = {Proc. IEEE International Test Conference (ITC)},
  year = {2011},
  pages = {1--8},
  keywords = {BIST, Pseudo-Exhaustive Testing, Defect Coverage, N-Detect},
  abstract = {Pattern generation for embedded testing often consists of a phase generating random patterns and a second phase where deterministic patterns are applied. This paper presents a method which optimizes the first phase significantly and increases the defect coverage, while reducing the number of deterministic patterns required in the second phase.
The method is based on the concept of pseudo-exhaustive testing (PET), which was proposed as a method for fault model independent testing with high defect coverage. As its test length can grow exponentially with the circuit size, an application to larger circuits is usually impractical.
In this paper, partial pseudo-exhaustive testing (P-PET) is presented as a synthesis technique for multiple-polynomial feedback shift registers. It scales with current technology and is comparable to conventional pseudo-random (PR) pattern testing regarding test costs and test application time. The advantages with respect to defect coverage, N-detectability for stuck-at faults and the reduction of deterministic test lengths are shown using state-of-the-art industrial circuits.}, url = {http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=6139130}, doi = {http://dx.doi.org/10.1109/TEST.2011.6139130}, file = {http://www.meimhof.de/publications/conference/2011_ITC_MumtazIW2011.pdf} }
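
P-PET builds on the pseudo-exhaustive idea of testing every output cone exhaustively over its own input support rather than over all circuit inputs, which keeps the pattern count bounded by the largest cone instead of the full input width. The Python sketch below only illustrates this per-cone enumeration with an invented two-cone example; it is not the multiple-polynomial LFSR synthesis presented in the paper.

from itertools import product

# Hypothetical circuit description: each output cone is listed with the primary
# inputs in its support. A cone over k inputs needs 2^k patterns to be tested
# exhaustively, independent of the total number of circuit inputs.
cones = {
    "y0": ["a", "b", "c"],        # 3-input cone -> 8 patterns
    "y1": ["b", "c", "d", "e"],   # 4-input cone -> 16 patterns
}
all_inputs = ["a", "b", "c", "d", "e", "f"]

def per_cone_exhaustive(cone_inputs, all_inputs):
    # Enumerate every value combination of the cone inputs; inputs outside the
    # cone stay unspecified ('-') and may be filled arbitrarily or shared.
    for values in product("01", repeat=len(cone_inputs)):
        assignment = dict(zip(cone_inputs, values))
        yield "".join(assignment.get(i, "-") for i in all_inputs)

for cone, support in cones.items():
    patterns = list(per_cone_exhaustive(support, all_inputs))
    print(cone, len(patterns), "patterns, first two:", patterns[:2])
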
Imhof, M.E. and Wunderlich, H.-J. Soft Error Correction in Embedded Storage Elements 2011 Proc. 17th IEEE International On-Line Testing Symposium (IOLTS), pp. 169-174 inproceedings DOI URL PDF 
Keywords: Single Event Effect, Correction, Latch, Register
Abstract: In this paper a soft error correction scheme for embedded storage elements in level-sensitive designs is presented. It employs space redundancy to detect and locate Single Event Upsets (SEUs). It is able to detect SEUs in registers and to employ architectural replay to perform correction with low additional hardware overhead. Together with the proposed bit flipping latch, an online correction can be implemented at the bit level with a minimal loss of clock cycles. A comparison with other detection and correction schemes shows a significantly lower hardware overhead.
BibTeX:
@inproceedings{2011_IOLTS_ImhofW2011,
  author = {Imhof, Michael E. and Wunderlich, Hans-Joachim},
  title = {Soft Error Correction in Embedded Storage Elements},
  booktitle = {Proc. 17th IEEE International On-Line Testing Symposium (IOLTS)},
  year = {2011},
  pages = {169--174},
  keywords = {Single Event Effect, Correction, Latch, Register},
  abstract = {In this paper a soft error correction scheme for embedded storage elements in level-sensitive designs is presented. It employs space redundancy to detect and locate Single Event Upsets (SEUs). It is able to detect SEUs in registers and to employ architectural replay to perform correction with low additional hardware overhead. Together with the proposed bit flipping latch, an online correction can be implemented at the bit level with a minimal loss of clock cycles. A comparison with other detection and correction schemes shows a significantly lower hardware overhead.},
  url = {http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=5993832},
  doi = {http://dx.doi.org/10.1109/IOLTS.2011.5993832},
  file = {http://www.meimhof.de/publications/conference/2011_IOLTS_ImhofW2011.pdf}
}
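
The correction scheme above relies on redundancy that both detects and locates a single upset so the affected bit can simply be flipped back. As a rough software analogy (not the bit-flipping latch or the architectural replay mechanism of the paper), the sketch below uses cross-parity over a small, invented register file: word parities flag the corrupted register, bit-position parities flag the corrupted bit, and the located bit is inverted.

# Software analogy only: cross-parity over a small register file. One parity
# bit per word detects and points to the corrupted register, one parity bit
# per bit position points to the corrupted bit, and the located bit is flipped
# back. Register contents and sizes are invented.

def parities(words, width):
    row = [sum(w) % 2 for w in words]                            # per word
    col = [sum(w[i] for w in words) % 2 for i in range(width)]   # per position
    return row, col

width = 8
registers = [
    [0, 1, 1, 0, 0, 1, 0, 1],
    [1, 1, 0, 0, 1, 0, 0, 0],
    [0, 0, 0, 1, 1, 1, 1, 0],
]
ref_row, ref_col = parities(registers, width)

registers[1][5] ^= 1          # inject a single event upset in word 1, bit 5

row, col = parities(registers, width)
bad_word = next((i for i in range(len(registers)) if row[i] != ref_row[i]), None)
bad_bit = next((i for i in range(width) if col[i] != ref_col[i]), None)
if bad_word is not None and bad_bit is not None:
    registers[bad_word][bad_bit] ^= 1      # flip the located bit back
    print("corrected single event upset at word", bad_word, "bit", bad_bit)
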
Kochte, M.A., Zoellin, C.G., Baranowski, R., Imhof, M.E., Wunderlich, H.-J., Hatami, N., Di Carlo, S. and Prinetto, P. Efficient Simulation of Structural Faults for the Reliability Evaluation at System-Level 2010 Proc. 19th IEEE Asian Test Symposium (ATS), pp. 3-8 inproceedings DOI URL PDF 
Keywords: Fault simulation, multi-level, transaction-level modeling
Abstract: In recent technology nodes, reliability is considered a part of the standard design flow at all levels of embedded system design. While techniques that use only low-level models at gate- and register transfer-level offer high accuracy, they are too inefficient to consider the overall application of the embedded system. Multi-level models with high abstraction are essential to efficiently evaluate the impact of physical defects on the system. This paper provides a methodology that leverages state-of-the-art techniques for efficient fault simulation of structural faults together with transaction-level modeling. This way it is possible to accurately evaluate the impact of the faults on the entire hardware/software system. A case study of a system consisting of hardware and software for image compression and data encryption is presented and the method is compared to a standard gate/RT mixed-level approach.
BibTeX:
@inproceedings{2010_ATS_KochteZBIWHDP2010,
  author = {Kochte, Michael A. and Zoellin, Christian G. and Baranowski, Rafal and Imhof, Michael E. and Wunderlich, Hans-Joachim and Hatami, Nadereh and Di Carlo, Stefano and Prinetto, Paolo},
  title = {Efficient Simulation of Structural Faults for the Reliability Evaluation at System-Level},
  booktitle = {Proc. 19th IEEE Asian Test Symposium (ATS)},
  year = {2010},
  pages = {3--8},
  keywords = {Fault simulation, multi-level, transaction-level modeling},
  abstract = {In recent technology nodes, reliability is considered a part of the standard design flow at all levels of embedded system design. While techniques that use only low-level models at gate- and register transfer-level offer high accuracy, they are too inefficient to consider the overall application of the embedded system. Multi-level models with high abstraction are essential to efficiently evaluate the impact of physical defects on the system. This paper provides a methodology that leverages state-of-the-art techniques for efficient fault simulation of structural faults together with transaction-level modeling. This way it is possible to accurately evaluate the impact of the faults on the entire hardware/software system. A case study of a system consisting of hardware and software for image compression and data encryption is presented and the method is compared to a standard gate/RT mixed-level approach.},
  url = {http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=5692211},
  doi = {http://dx.doi.org/10.1109/ATS.2010.10},
  file = {http://www.meimhof.de/publications/conference/2010_ATS_KochteZBIWHDP2010.pdf}
}
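
The key point of the multi-level methodology is that only the block under fault injection needs a bit-accurate low-level model, while the rest of the hardware/software system is simulated at transaction level. The toy Python sketch below mimics that split with an invented 8-bit adder carrying a stuck-at-0 fault behind a purely functional interface; it is a schematic illustration, not the simulator described in the paper.

def adder_gate_level(a, b, stuck_at_zero_bit=None):
    # Bit-accurate 8-bit adder; optionally forces one sum bit to 0 to mimic a
    # structural stuck-at-0 fault on that output line.
    s = (a + b) & 0xFF
    if stuck_at_zero_bit is not None:
        s &= ~(1 << stuck_at_zero_bit) & 0xFF
    return s

def run_application(add, data):
    # Transaction-level workload: the "system" only calls the adder as a
    # function and accumulates a data stream.
    acc = 0
    for d in data:
        acc = add(acc, d)
    return acc

data = [3, 17, 250, 44, 91]
good = run_application(adder_gate_level, data)
bad = run_application(lambda a, b: adder_gate_level(a, b, stuck_at_zero_bit=2), data)
print("fault-free:", good, "faulty:", bad, "system-level effect:", good != bad)
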
Kochte, M.A., Zoellin, C.G., Baranowski, R., Imhof, M.E., Wunderlich, H.-J., Hatami, N., Di Carlo, S. and Prinetto, P. System Reliability Evaluation Using Concurrent Multi-Level Simulation of Structural Faults 2010 Proc. IEEE International Test Conference (ITC), pp. 1 inproceedings DOI URL PDF 
Abstract: This paper provides a methodology that leverages state-of-the-art techniques for efficient fault simulation of structural faults together with transaction level modeling. This way it is possible to accurately evaluate the impact of the faults on the entire hardware/software system.
BibTeX:
@inproceedings{2010_ITC_KochteZBIWHDP2010,
  author = {Kochte, Michael A. and Zoellin, Christian G. and Baranowski, Rafal and Imhof, Michael E. and Wunderlich, Hans-Joachim and Hatami, Nadereh and Di Carlo, Stefano and Prinetto, Paolo},
  title = {System Reliability Evaluation Using Concurrent Multi-Level Simulation of Structural Faults},
  booktitle = {Proc. IEEE International Test Conference (ITC)},
  year = {2010},
  pages = {1},
  abstract = {This paper provides a methodology that leverages state-of-the-art techniques for efficient fault simulation of structural faults together with transaction level modeling. This way it is possible to accurately evaluate the impact of the faults on the entire hardware/software system.},
  url = {http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=5699309},
  doi = {http://dx.doi.org/10.1109/TEST.2010.5699309},
  file = {http://www.meimhof.de/publications/conference/2010_ITC_KochteZBIWHDP2010.pdf}
}
Kochte, M.A., Zoellin, C.G., Baranowski, R., Imhof, M.E., Wunderlich, H.-J., Hatami, N., Di Carlo, S. and Prinetto, P. Effiziente Simulation von strukturellen Fehlern für die Zuverlässigkeitsanalyse auf Systemebene 2010 Proc. 4. GMM/GI/ITG-Fachtagung Zuverlässigkeit und Entwurf (ZuE), pp. 25-32 inproceedings URL PDF 
Keywords: Transaktionsebenen-Modellierung, Ebenenübergreifende Fehlersimulation
Transaction level modelling, multi-level fault simulation
Abstract: In aktueller Prozesstechnologie muss die Zuverlässigkeit in allen Entwurfsschritten von eingebetteten Systemen betrachtet werden. Methoden, die nur Modelle auf unteren Abstraktionsebenen, wie Gatter- oder Registertransferebene, verwenden, bieten zwar eine hohe Genauigkeit, sind aber zu ineffizient, um komplexe Hardware/Software-Systeme zu analysieren. Hier werden ebenenübergreifende Verfahren benötigt, die auch hohe Abstraktion unterstützen, um effizient die Auswirkungen von Defekten im System bewerten zu können. Diese Arbeit stellt eine Methode vor, die aktuelle Techniken für die effiziente Simulation von strukturellen Fehlern mit Systemmodellierung auf Transaktionsebene kombiniert. Auf diese Weise ist es möglich, eine präzise Bewertung der Fehlerauswirkung auf das gesamte Hardware/Software-System durchzuführen. Die Ergebnisse einer Fallstudie eines Hardware/Software-Systems zur Datenverschlüsselung und Bildkompression werden diskutiert und die Methode wird mit einem Standard-Fehlerinjektionsverfahren verglichen.

Reliability assessment has become indispensable in the course of embedded systems development. Evaluation at gate- and register transfer level is accurate but requires high computational effort and is therefore not applicable to contemporary hardware/software systems. Precise low-level fault simulation techniques need to be combined with fast, high-level models to evaluate the effect of physical defects on entire system operation. In this work, state-of-the-art techniques for parallel fault simulation at gate-level are combined with concurrent system simulation at transaction-level. The proposed approach enables accurate evaluation of structural fault effects on the operation of complex hardware/software systems. Its accuracy and performance gain are confirmed by a comparison with a standard gate-level/RTL mixed-level approach in several case studies.

BibTeX:
@inproceedings{2010_ZuE_KochteZBIWHDP2010,
  author = {Kochte, Michael A. and Zoellin, Christian G. and Baranowski, Rafal and Imhof, Michael E. and Wunderlich, Hans-Joachim and Hatami, Nadereh and Di Carlo, Stefano and Prinetto, Paolo},
  title = {Effiziente Simulation von strukturellen Fehlern für die Zuverlässigkeitsanalyse auf Systemebene},
  booktitle = {Proc. 4. GMM/GI/ITG-Fachtagung Zuverlässigkeit und Entwurf (ZuE)},
  year = {2010},
  pages = {25--32},
  keywords = {Transaktionsebenen-Modellierung, Ebenenübergreifende Fehlersimulation
Transaction level modelling, multi-level fault simulation}, abstract = {In aktueller Prozesstechnologie muss die Zuverlässigkeit in allen Entwurfsschritten von eingebetteten Systemen betrachtet werden. Methoden, die nur Modelle auf unteren Abstraktionsebenen, wie Gatter- oder Registertransferebene, verwenden, bieten zwar eine hohe Genauigkeit, sind aber zu ineffizient, um komplexe Hardware/Software-Systeme zu analysieren. Hier werden ebenenübergreifende Verfahren benötigt, die auch hohe Abstraktion unterstützen, um effizient die Auswirkungen von Defekten im System bewerten zu können. Diese Arbeit stellt eine Methode vor, die aktuelle Techniken für die effiziente Simulation von strukturellen Fehlern mit Systemmodellierung auf Transaktionsebene kombiniert. Auf diese Weise ist es möglich, eine präzise Bewertung der Fehlerauswirkung auf das gesamte Hardware/Software-System durchzuführen. Die Ergebnisse einer Fallstudie eines Hardware/Software-Systems zur Datenverschlüsselung und Bildkompression werden diskutiert und die Methode wird mit einem Standard-Fehlerinjektionsverfahren verglichen.

Reliability assessment has become indispensable in the course of embedded systems development. Evaluation at gate- and register transfer level is accurate but requires high computational effort and is therefore not applicable to contemporary hardware/software systems. Precise low-level fault simulation techniques need to be combined with fast, high-level models to evaluate the effect of physical defects on entire system operation. In this work, state-of-the-art techniques for parallel fault simulation at gate-level are combined with concurrent system simulation at transaction-level. The proposed approach enables accurate evaluation of structural fault effects on the operation of complex hardware/software systems. Its accuracy and performance gain are confirmed by a comparison with a standard gate-level/RTL mixed-level approach in several case studies.}, url = {http://www.vde-verlag.de/proceedings-de/453299003.html}, file = {http://www.meimhof.de/publications/conference/2010_ZuE_KochteZBIWHDP2010.pdf} }

Kochte, M.A., Zoellin, C.G., Imhof, M.E., Salimi Khaligh, R., Radetzki, M., Wunderlich, H.-J., Di Carlo, S. and Prinetto, P. Test Exploration and Validation Using Transaction Level Models 2009 Proc. Design, Automation and Test in Europe (DATE), pp. 1250-1253 inproceedings DOI URL PDF 
Keywords: Test of systems-on-chip, design-for-test, transaction level modeling
Abstract: The complexity of the test infrastructure and test strategies in systems-on-chip approaches the complexity of the functional design space. This paper presents test design space exploration and validation of test strategies and schedules using transaction level models (TLMs). Since many aspects of testing involve the transfer of a significant amount of test stimuli and responses, the communication-centric view of TLMs suits this purpose exceptionally well.
BibTeX:
@inproceedings{2009_DATE_KochteZISRWDP2009,
  author = {Kochte, Michael A. and Zoellin, Christian G. and Imhof, Michael E. and Salimi Khaligh, Rauf and Radetzki, Martin and Wunderlich, Hans-Joachim and Di Carlo, Stefano and Prinetto, Paolo},
  title = {Test Exploration and Validation Using Transaction Level Models},
  booktitle = {Proc. Design, Automation and Test in Europe (DATE)},
  year = {2009},
  pages = {1250--1253},
  keywords = {Test of systems-on-chip, design-for-test, transaction level modeling},
  abstract = {The complexity of the test infrastructure and test strategies in systems-on-chip approaches the complexity of the functional design space. This paper presents test design space exploration and validation of test strategies and schedules using transaction level models (TLMs). Since many aspects of testing involve the transfer of a significant amount of test stimuli and responses, the communication-centric view of TLMs suits this purpose exceptionally well.},
  url = {http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=5090856},
  doi = {http://dx.doi.org/10.1109/DATE.2009.5090856},
  file = {http://www.meimhof.de/publications/conference/2009_DATE_KochteZISRWDP2009.pdf}
}
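
Because most test activities boil down to moving large volumes of stimuli and responses, even a coarse transaction-level model of the test access mechanism already exposes schedule length and bandwidth bottlenecks. The sketch below estimates a purely serial test schedule from invented per-core data volumes and an assumed 32-bit, 100 MHz TAM; all numbers and names are hypothetical and only illustrate the communication-centric view.

# Toy transaction-level estimate of a test schedule: each core's test is a block
# of stimulus/response data moved over a shared test access mechanism (TAM).
cores = {                      # core -> (patterns, bits per pattern), invented
    "cpu":  (1200, 512),
    "dsp":  (800, 256),
    "uart": (150, 64),
}
tam_width_bits = 32
tam_clock_mhz = 100

def test_time_us(patterns, bits_per_pattern):
    # Only the transferred data volume is modelled: cycles needed to shift the
    # stimuli/responses of all patterns over the TAM, at the TAM clock rate.
    cycles = patterns * bits_per_pattern / tam_width_bits
    return cycles / tam_clock_mhz

# Serial schedule: cores are tested one after another on the shared TAM.
serial = sum(test_time_us(p, b) for p, b in cores.values())
print("serial schedule on one shared TAM: %.1f us" % serial)
for name, (p, b) in cores.items():
    print("  %-5s %8.1f us" % (name, test_time_us(p, b)))
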
Imhof, M.E., Wunderlich, H.-J. and Zoellin, C.G. Erkennung von transienten Fehlern in Schaltungen mit reduzierter Verlustleistung 2008 Proc. 2. GMM/GI/ITG-Fachtagung Zuverlässigkeit und Entwurf (ZuE), pp. 107-114 inproceedings URL PDF 
Keywords: Robustes Design, Fehlertoleranz, Verlustleistung, Latch, Register, Single Event Effect
Robust design, fault tolerance, power dissipation, latch, register, single event effects
Abstract: Für Speicherfelder sind fehlerkorrigierende Codes die vorherrschende Methode, um akzeptable Fehlerraten zu erreichen. In vielen aktuellen Schaltungen erreicht die Zahl der Speicherelemente in freier Logik die Größenordnung der Zahl von SRAM-Zellen vor wenigen Jahren. Zur Reduktion der Verlustleistung wird häufig der Takt der pegelgesteuerten Speicherelemente unterdrückt und die Speicherelemente müssen ihren Zustand über lange Zeitintervalle halten. Die Notwendigkeit Speicherzellen abzusichern wird zusätzlich durch die Miniaturisierung verstärkt, die zu einer erhöhten Empfindlichkeit der Speicherelemente geführt hat. Dieser Artikel stellt eine Methode zur fehlertoleranten Anordnung von pegelgesteuerten Speicherelementen vor, die bei unterdrücktem Takt Einfachfehler lokalisieren und Mehrfachfehler erkennen kann. Bei aktiviertem Takt können Einfach- und Mehrfachfehler erkannt werden. Die Register können ähnlich wie Prüfpfade effizient in den Entwurfsgang integriert werden. Die Diagnoseinformation kann auf Modulebene leicht berechnet und genutzt werden.

For memories, error-correcting codes are the method of choice to guarantee acceptable error rates. In many current designs the number of storage elements in random logic reaches the number of SRAM cells found on chips only a few years ago. Clock gating is often employed to reduce the power dissipation of level-sensitive storage elements, while the elements have to retain their state over long periods of time. The necessity to protect storage elements is further amplified by miniaturization, which has increased their susceptibility. This article proposes a method for the fault-tolerant arrangement of level-sensitive storage elements that can locate single faults and detect multiple faults while being clock-gated. With an active clock, single and multiple faults can be detected. The registers can be integrated efficiently, similar to the scan design flow, and the diagnostic information can be easily computed and used at module level.

BibTeX:
@inproceedings{2008_ZuE_ImhofWZ2008,
  author = {Imhof, Michael E. and Wunderlich, Hans-Joachim and Zoellin, Christian G.},
  title = {Erkennung von transienten Fehlern in Schaltungen mit reduzierter Verlustleistung},
  booktitle = {Proc. 2. GMM/GI/ITG-Fachtagung Zuverlässigkeit und Entwurf (ZuE)},
  year = {2008},
  pages = {107--114},
  keywords = {Robustes Design, Fehlertoleranz, Verlustleistung, Latch, Register, Single Event Effect
Robust design, fault tolerance, power dissipation, latch, register, single event effects}, abstract = {Für Speicherfelder sind fehlerkorrigierende Codes die vorherrschende Methode, um akzeptable Fehlerraten zu erreichen. In vielen aktuellen Schaltungen erreicht die Zahl der Speicherelemente in freier Logik die Größenordnung der Zahl von SRAM-Zellen vor wenigen Jahren. Zur Reduktion der Verlustleistung wird häufig der Takt der pegelgesteuerten Speicherelemente unterdrückt und die Speicherelemente müssen ihren Zustand über lange Zeitintervalle halten. Die Notwendigkeit Speicherzellen abzusichern wird zusätzlich durch die Miniaturisierung verstärkt, die zu einer erhöhten Empfindlichkeit der Speicherelemente geführt hat. Dieser Artikel stellt eine Methode zur fehlertoleranten Anordnung von pegelgesteuerten Speicherelementen vor, die bei unterdrücktem Takt Einfachfehler lokalisieren und Mehrfachfehler erkennen kann. Bei aktiviertem Takt können Einfach- und Mehrfachfehler erkannt werden. Die Register können ähnlich wie Prüfpfade effizient in den Entwurfsgang integriert werden. Die Diagnoseinformation kann auf Modulebene leicht berechnet und genutzt werden.

For memories, error-correcting codes are the method of choice to guarantee acceptable error rates. In many current designs the number of storage elements in random logic reaches the number of SRAM cells found on chips only a few years ago. Clock gating is often employed to reduce the power dissipation of level-sensitive storage elements, while the elements have to retain their state over long periods of time. The necessity to protect storage elements is further amplified by miniaturization, which has increased their susceptibility. This article proposes a method for the fault-tolerant arrangement of level-sensitive storage elements that can locate single faults and detect multiple faults while being clock-gated. With an active clock, single and multiple faults can be detected. The registers can be integrated efficiently, similar to the scan design flow, and the diagnostic information can be easily computed and used at module level.}, url = {http://www.vde-verlag.de/proceedings-de/453119017.html}, file = {http://www.meimhof.de/publications/conference/2008_ZuE_ImhofWZ2008.pdf} }

Imhof, M.E., Wunderlich, H.-J. and Zoellin, C.G. Integrating Scan Design and Soft Error Correction in Low-Power Applications 2008 Proc. 14th IEEE International On-Line Testing Symposium (IOLTS), pp. 59-64 inproceedings DOI URL PDF 
Keywords: Robust design, fault tolerance, low power, latch, register, single event effects
Abstract: Error correcting coding is the dominant technique to achieve acceptable soft-error rates in memory arrays. In many modern circuits, the number of memory elements in the random logic is in the order of the number of SRAM cells on chips only a few years ago. Often latches are clock gated and have to retain their states during longer periods. Moreover, miniaturization has led to elevated susceptibility of the memory elements and further increases the need for protection. This paper presents a fault-tolerant register latch organization that is able to detect single-bit errors while it is clock gated. With active clock, single and multiple errors are detected. The registers can be efficiently integrated similar to the scan design flow, and error detecting or locating information can be collected at module level. The resulting structure can be efficiently reused for offline and general online testing.
BibTeX:
@inproceedings{2008_IOLTS_ImhofWZ2008,
  author = {Imhof, Michael E. and Wunderlich, Hans-Joachim and Zoellin, Christian G.},
  title = {Integrating Scan Design and Soft Error Correction in Low-Power Applications},
  booktitle = {Proc. 14th IEEE International On-Line Testing Symposium (IOLTS)},
  year = {2008},
  pages = {59--64},
  keywords = {Robust design, fault tolerance, low power, latch, register, single event effects},
  abstract = {Error correcting coding is the dominant technique to achieve acceptable soft-error rates in memory arrays. In many modern circuits, the number of memory elements in the random logic is in the order of the number of SRAM cells on chips only a few years ago. Often latches are clock gated and have to retain their states during longer periods. Moreover, miniaturization has led to elevated susceptibility of the memory elements and further increases the need for protection. This paper presents a fault-tolerant register latch organization that is able to detect single-bit errors while it is clock gated. With active clock, single and multiple errors are detected. The registers can be efficiently integrated similar to the scan design flow, and error detecting or locating information can be collected at module level. The resulting structure can be efficiently reused for offline and general online testing.},
  url = {http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=4567063},
  doi = {http://dx.doi.org/10.1109/IOLTS.2008.31},
  file = {http://www.meimhof.de/publications/conference/2008_IOLTS_ImhofWZ2008.pdf}
}
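
A property exploited by such schemes is that a clock-gated register must hold its value, so a check bit captured at gating time can be re-evaluated at any later point. The minimal sketch below demonstrates only this invariant with a single parity bit over an invented 16-bit register; it does not model the latch organization, the multiple-error detection with active clock, or the scan integration of the paper.

import random

def parity(bits):
    return sum(bits) % 2

register = [random.randint(0, 1) for _ in range(16)]
check_bit = parity(register)      # captured when the clock is gated off

# ... long clock-gated period in which the register must hold its state;
# a particle strike flips one latch ...
register[7] ^= 1

if parity(register) != check_bit:
    print("single-bit upset detected in the clock-gated register")
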
Elm, M., Wunderlich, H.-J., Imhof, M.E., Zoellin, C.G., Leenstra, J. and Maeding, N. Scan chain clustering for test power reduction 2008 Proc. 45th ACM/IEEE Design Automation Conference (DAC), pp. 828-833 inproceedings DOI URL PDF 
Keywords: B.8.1 [Hardware]: Performance and Reliability - Reliability, Testing and Fault-Tolerance
Abstract: An effective technique to save power during scan based test is to switch off unused scan chains. The results obtained with this method strongly depend on the mapping of scan flip-flops into scan chains, which determines how many chains can be deactivated per pattern.
In this paper, a new method to cluster flip-flops into scan chains is presented, which minimizes the power consumption during test. It does not depend on a particular test set and can consequently improve the performance of any test power reduction technique. The approach does not specify any ordering inside the chains and fits seamlessly into any standard tool for scan chain integration. The application of known test power reduction techniques to the optimized scan chain configurations shows significant improvements for large industrial circuits.
BibTeX:
@inproceedings{2008_DAC_ElmWIZLM2008,
  author = {Elm, Melanie and Wunderlich, Hans-Joachim and Imhof, Michael E. and Zoellin, Christian G. and Leenstra, Jens and Maeding, Nicolas},
  title = {Scan chain clustering for test power reduction},
  booktitle = {Proc. 45th ACM/IEEE Design Automation Conference (DAC)},
  year = {2008},
  pages = {828--833},
  keywords = {B.8.1 [Hardware]: Performance and Reliability - Reliability, Testing and Fault-Tolerance},
  abstract = {An effective technique to save power during scan based test is to switch off unused scan chains. The results obtained with this method strongly depend on the mapping of scan flip-flops into scan chains, which determines how many chains can be deactivated per pattern.
In this paper, a new method to cluster flip-flops into scan chains is presented, which minimizes the power consumption during test. It does not depend on a particular test set and can consequently improve the performance of any test power reduction technique. The approach does not specify any ordering inside the chains and fits seamlessly into any standard tool for scan chain integration. The application of known test power reduction techniques to the optimized scan chain configurations shows significant improvements for large industrial circuits.}, url = {http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=4555934}, doi = {http://dx.doi.org/10.1145/1391469.1391680}, file = {http://www.meimhof.de/publications/conference/2008_DAC_ElmWIZLM2008.pdf} }
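
The benefit of clustering comes from grouping flip-flops whose specified (care) bits tend to appear in the same patterns, so that for many patterns entire chains carry no care bits and can be switched off. The toy sketch below makes that objective concrete with an invented care-bit matrix and a naive greedy grouping; note that the paper's method itself is test-set independent, unlike this illustration.

# care[p][f] == 1 if pattern p assigns a specified (care) value to flip-flop f.
care = [
    [1, 1, 0, 0, 0, 0],
    [1, 0, 0, 0, 1, 0],
    [0, 0, 1, 1, 0, 0],
    [0, 0, 1, 1, 0, 1],
]
num_ffs, chain_size = 6, 2

def signature(ff):
    # Set of patterns in which flip-flop ff carries a care bit.
    return frozenset(p for p in range(len(care)) if care[p][ff])

# Naive greedy grouping: start a chain with any unassigned flip-flop and fill
# it with the flip-flops whose pattern signatures overlap the seed the most.
unassigned = set(range(num_ffs))
chains = []
while unassigned:
    seed = unassigned.pop()
    chain = [seed]
    while len(chain) < chain_size and unassigned:
        best = max(unassigned, key=lambda f: len(signature(f) & signature(seed)))
        unassigned.remove(best)
        chain.append(best)
    chains.append(chain)

print("chains:", chains)
# A chain can be switched off for a pattern if none of its flip-flops is specified.
for p in range(len(care)):
    off = [c for c in chains if all(care[p][f] == 0 for f in c)]
    print("pattern", p, "-", len(off), "of", len(chains), "chains can be switched off")
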
Kochte, M.A., Zoellin, C.G., Imhof, M.E. and Wunderlich, H.-J. Test Set Stripping Limiting the Maximum Number of Specified Bits 2008 Proc. 4th IEEE International Symposium on Electronic Design, Test and Applications (DELTA), pp. 581-586, Best Paper Award inproceedings DOI URL PDF 
Keywords: Test relaxation, test generation, tailored ATPG
Abstract: This paper presents a technique that limits the maximum number of specified bits of any pattern in a given test set. The outlined method uses algorithms similar to ATPG, but exploits the information in the test set to quickly find test patterns with the desired properties. The resulting test sets show a significant reduction in the maximum number of specified bits in the test patterns. Furthermore, for commercial ATPG test sets even the overall number of specified bits is reduced substantially.
BibTeX:
@inproceedings{2008_DELTA_KochteZIW2008,
  author = {Kochte, Michael A. and Zoellin, Christian G. and Imhof, Michael E. and Wunderlich, Hans-Joachim},
  title = {Test Set Stripping Limiting the Maximum Number of Specified Bits},
  booktitle = {Proc. 4th IEEE International Symposium on Electronic Design, Test and Applications (DELTA)},
  year = {2008},
  pages = {581--586},
  note = {Best Paper Award},
  keywords = {Test relaxation, test generation, tailored ATPG},
  abstract = {This paper presents a technique that limits the maximum number of specified bits of any pattern in a given test set. The outlined method uses algorithms similar to ATPG, but exploits the information in the test set to quickly find test patterns with the desired properties. The resulting test sets show a significant reduction in the maximum number of specified bits in the test patterns. Furthermore, for commercial ATPG test sets even the overall number of specified bits is reduced substantially.},
  url = {http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=4459617},
  doi = {http://dx.doi.org/10.1109/DELTA.2008.64},
  file = {http://www.meimhof.de/publications/conference/2008_DELTA_KochteZIW2008.pdf}
}
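
Stripping a test set means unspecifying as many bits as possible per pattern while every pattern keeps detecting its assigned faults. The sketch below shows such a relaxation loop with a made-up placeholder check (still_detects); the paper instead uses ATPG-like reasoning on the circuit, so treat this purely as an illustration of the objective.

MAX_SPECIFIED = 3   # target care-bit budget per pattern (invented)

def still_detects(pattern, faults):
    # Placeholder for the ATPG-like check of the paper: here we simply pretend
    # the assigned faults need bits 0 and 4 of the pattern to stay specified.
    return pattern[0] != "x" and pattern[4] != "x"

def strip(pattern, faults, budget):
    bits = list(pattern)
    for i, v in enumerate(bits):
        if sum(b != "x" for b in bits) <= budget:
            break
        if v == "x":
            continue
        bits[i] = "x"                        # try to unspecify this bit
        if not still_detects("".join(bits), faults):
            bits[i] = v                      # roll back if detection is lost
    return "".join(bits)

print(strip("10110101", ["f1", "f2"], MAX_SPECIFIED))   # -> 1xxx0xx1
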
Imhof, M.E., Zoellin, C.G., Wunderlich, H.-J., Maeding, N. and Leenstra, J. Scan Test Planning for Power Reduction 2007 Proc. 44th ACM/IEEE Design Automation Conference (DAC), pp. 521-526 inproceedings DOI URL PDF 
Keywords: B.8.1 [Hardware]: Performance and Reliability - Reliability, Testing and Fault-Tolerance Algorithms, Reliability
Test planning, power during test
Abstract: Many STUMPS architectures found in current chip designs allow disabling of individual scan chains for debug and diagnosis. In a recent paper it has been shown that this feature can be used for reducing the power consumption during test. Here, we present an efficient algorithm for the automated generation of a test plan that keeps fault coverage as well as test time, while significantly reducing the amount of wasted energy. A fault isolation table, which is usually used for diagnosis and debug, is employed to accurately determine scan chains that can be disabled. The algorithm was successfully applied to large industrial circuits and identifies a very large amount of excess pattern shift activity.
BibTeX:
@inproceedings{2007_DAC_ImhofZWML2007,
  author = {Imhof, Michael E. and Zoellin, Christian G. and Wunderlich, Hans-Joachim and Maeding, Nicolas and Leenstra, Jens},
  title = {Scan Test Planning for Power Reduction},
  booktitle = {Proc. 44th ACM/IEEE Design Automation Conference (DAC)},
  year = {2007},
  pages = {521--526},
  keywords = {B.8.1 [Hardware]: Performance and Reliability - Reliability, Testing and Fault-Tolerance Algorithms, Reliability
Test planning, power during test}, abstract = {Many STUMPS architectures found in current chip designs allow disabling of individual scan chains for debug and diagnosis. In a recent paper it has been shown that this feature can be used for reducing the power consumption during test. Here, we present an efficient algorithm for the automated generation of a test plan that keeps fault coverage as well as test time, while significantly reducing the amount of wasted energy. A fault isolation table, which is usually used for diagnosis and debug, is employed to accurately determine scan chains that can be disabled. The algorithm was successfully applied to large industrial circuits and identifies a very large amount of excess pattern shift activity.}, url = {http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=4261239}, doi = {http://dx.doi.org/10.1145/1278480.1278614}, file = {http://www.meimhof.de/publications/conference/2007_DAC_ImhofZWML2007.pdf} }
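
Given a fault isolation table that records through which scan chain each fault detected by a pattern is observed, test planning amounts to keeping, per pattern, a small set of chains that still covers all of that pattern's faults and disabling the rest. The sketch below runs a greedy covering step on an invented two-pattern table; it illustrates the planning objective, not the specific algorithm of the paper.

# fault_isolation[pattern][chain] = set of faults observed in that chain
# (all names and entries below are invented for illustration).
fault_isolation = {
    "p0": {"c0": {"f1", "f2"}, "c1": {"f2"}, "c2": {"f3"}, "c3": set()},
    "p1": {"c0": {"f4"}, "c1": {"f5", "f6"}, "c2": {"f4", "f5"}, "c3": set()},
}

def plan(chains_to_faults):
    needed = set().union(*chains_to_faults.values())
    active = []
    while needed:
        # Greedy covering step: pick the chain observing the most uncovered faults.
        chain = max(chains_to_faults, key=lambda c: len(chains_to_faults[c] & needed))
        if not chains_to_faults[chain] & needed:
            break
        active.append(chain)
        needed -= chains_to_faults[chain]
    return active

for pattern, table in fault_isolation.items():
    active = plan(table)
    disabled = sorted(set(table) - set(active))
    print(pattern, "- keep", sorted(active), "active, disable", disabled)
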
Imhof, M.E., Zoellin, C.G., Wunderlich, H.-J., Maeding, N. and Leenstra, J. Verlustleistungsoptimierende Testplanung zur Steigerung von Zuverlässigkeit und Ausbeute 2007 Proc. 1. GMM/GI/ITG-Fachtagung Zuverlässigkeit und Entwurf (ZuD), pp. 69-76 inproceedings URL PDF 
Abstract: Die stark erhöhte durchschnittliche und maximale Verlustleistung während des Tests integrierter Schaltungen kann zu einer Beeinträchtigung der Ausbeute bei der Produktion sowie der Zuverlässigkeit im späteren Betrieb führen. Wir stellen eine Testplanung für Schaltungen mit parallelen Prüfpfaden vor, welche die Verlustleistung während des Tests reduziert. Die Testplanung wird auf ein Überdeckungsproblem abgebildet, das mit einem heuristischen Lösungsverfahren effizient auch für große Schaltungen gelöst werden kann. Die Effizienz des vorgestellten Verfahrens wird sowohl für die bekannten Benchmarkschaltungen als auch für große industrielle Schaltungen demonstriert.

The strongly increased average and peak power dissipation during the test of integrated circuits can impair production yield as well as reliability in later operation. We present a test planning approach for circuits with parallel scan chains that reduces the power dissipation during test. Test planning is mapped to a covering problem that can be solved efficiently by a heuristic procedure even for large circuits. The efficiency of the presented method is demonstrated both for the well-known benchmark circuits and for large industrial circuits.
BibTeX:
@inproceedings{2007_ZuD_ImhofZWML2007,
  author = {Imhof, Michael E. and Zoellin, Christian G. and Wunderlich, Hans-Joachim and Maeding, Nicolas and Leenstra, Jens},
  title = {Verlustleistungsoptimierende Testplanung zur Steigerung von Zuverlässigkeit und Ausbeute},
  booktitle = {Proc. 1. GMM/GI/ITG-Fachtagung Zuverlässigkeit und Entwurf (ZuD)},
  year = {2007},
  pages = {69--76},
  abstract = {Die stark erhöhte durchschnittliche und maximale Verlustleistung während des Tests integrierter Schaltungen kann zu einer Beeinträchtigung der Ausbeute bei der Produktion sowie der Zuverlässigkeit im späteren Betrieb führen. Wir stellen eine Testplanung für Schaltungen mit parallelen Prüfpfaden vor, welche die Verlustleistung während des Tests reduziert. Die Testplanung wird auf ein Überdeckungsproblem abgebildet, das mit einem heuristischen Lösungsverfahren effizient auch für große Schaltungen gelöst werden kann. Die Effizienz des vorgestellten Verfahrens wird sowohl für die bekannten Benchmarkschaltungen als auch für große industrielle Schaltungen demonstriert.},
  url = {http://www.vde-verlag.de/proceedings-de/463023008.html},
  file = {http://www.meimhof.de/publications/conference/2007_ZuD_ImhofZWML2007.pdf}
}