Thursday, October 22, 2009

What is a defect?

In a software shop, we used to think of defects as software bugs: the kind you can blame on the poor, lowly coders. Fixing defects meant nothing more than code changes.


If we take a broader view, defects can reside in many places. If your requirements fail to capture customer needs, there is a defect. If your design does not fulfill all the requirements, there is a defect in the design. Similarly, there can be defects in other phases or documents, according to your process.


How can we look back and say, “There is a defect in the design stage”? We need some evidence of the design: an architecture document or a design spec. For the same reason, we would need a requirements document.


This could be a little heavier than we would like, and many agile processes do not encourage people to write documents. However, without any evidence of the design, we will never know what happened in design-related activities, or what we can do to improve them.


The divide-and-conquer strategy is used a lot in software engineering. If we can cut the long, complex process of software production into pieces, we have a better chance of debugging each piece. For example, how do we know whether our design review is working? The answer could be: by counting how many design-related defects are still reported down the line.
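To make that concrete, here is a minimal sketch of phase-based defect accounting; the phase names and the sample records are invented purely for illustration:

```python
# Each defect records the phase where it was injected and the
# activity where it was detected. All records here are hypothetical.
defects = [
    {"injected": "requirements", "detected": "design review"},
    {"injected": "design",       "detected": "design review"},
    {"injected": "design",       "detected": "system test"},
    {"injected": "design",       "detected": "field"},
    {"injected": "code",         "detected": "code review"},
]

def leakage(defects, phase):
    """Fraction of defects injected in `phase` that escaped past its review."""
    injected = [d for d in defects if d["injected"] == phase]
    escaped = [d for d in injected if d["detected"] != f"{phase} review"]
    return len(escaped) / len(injected) if injected else 0.0

# 2 of the 3 design defects slipped past the design review.
print(leakage(defects, "design"))
```

A high leakage number for the design phase is exactly the kind of evidence that tells you the design review, not the coders, needs fixing.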

Introduction to scrum (software development)

Scrum is a lightweight, or agile, process for project management. Because of its simplicity, it’s quite popular among software teams. It’s so lightweight that you can very easily say “we are doing scrum” when asked by management. However, it may not help you much if your team is not disciplined to start with.


Scrum is simple, as described here (wiki). You just need to remember three things:


1) Sprint: a short time frame, usually a month, for project review and deciding what to do in the next sprint


2) Backlog: the list of task items, with estimates, that you have to finish for the whole project, as well as for this sprint


3) Daily scrum meeting: 15 minutes maximum, a daily status report from the team


See, you just need a list of tasks with estimated effort attached, chop them into monthly milestones, and hold a meeting every day to collect status. Sounds simple, doesn’t it?


Scrum even provides a template for the daily meeting. Everyone on the team has to answer three questions:


1) What have you done since yesterday?


2) What are you planning to do today?


3) Do you have any problems preventing you from accomplishing your goal?


It’s really not that simple when you think about it again. How do I identify and decompose the tasks that need to be done for the project? How do I estimate them? How do I pick which items to do in the next sprint? What if the estimation is off by 2x, 3x, or 10x?


Creating a software architecture can help you decompose the tasks; timesheets and function points can help you estimate effort; risk management should help in picking the right tasks; and if the estimation is off by too much, it might be better to start re-negotiating with your client or management early in the project. All of these need discipline to generate reliable outcomes.
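To illustrate just the bookkeeping side of this, here is a toy sketch of backlog-driven sprint planning; the item names, estimates, priorities, and capacity are all invented:

```python
# A backlog item: name, estimated effort in ideal days, and priority rank.
# All numbers are made up for illustration.
backlog = [
    ("user login",      5, 1),
    ("report export",   8, 2),
    ("search filter",   3, 3),
    ("admin dashboard", 13, 4),
]

def plan_sprint(backlog, capacity):
    """Greedily fill the sprint with the highest-priority items that fit."""
    plan, used = [], 0
    for name, estimate, _rank in sorted(backlog, key=lambda item: item[2]):
        if used + estimate <= capacity:
            plan.append(name)
            used += estimate
    return plan, used

plan, used = plan_sprint(backlog, capacity=20)
print(plan, used)  # ['user login', 'report export', 'search filter'] 16
```

Real sprint planning is a negotiation, not a greedy loop; the point is only that the estimates and priorities have to exist, and be honest, before any of it can start.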


Even if you don’t do anything mentioned above, a short daily meeting with your team members can still reveal the situation early, i.e., increase the visibility of your project.


Quality, Fitness, and Quality Attributes

We always say we shall provide high-quality products to our customers. What does quality mean, anyway?


If you ask a programmer, I bet the answer would be “no bugs.” Is that it? Would you please define “bug”? Say, if I deliver a stone to you, and tell you there is not a single bug in it, does that count as high-quality software?


Quality is a vague concept, and is better defined as “fitness to purpose.” If the software doesn’t help our customers achieve their purpose, how bug-free it is doesn’t even matter.


What is our customers’ purpose? Selling the phones, yes, but through the right functionality, smooth execution, and robust performance. The functionality, or feature set, can be elicited through several proven methods, and we might discuss that later. The other two lead us to “Quality Attributes.”


Quality Attributes cover all those vague items we usually use to describe a software system. Some of the common ones are listed here:





  • Extensibility: how difficult it is to extend the feature set


  • Maintainability: how difficult it is to fix a bug in it


  • Performance: how fast it is


  • Robustness: how strong it is when abused


  • Usability: how easy it is to use the system


  • Capacity: how many requests it can handle


One major goal of the requirement elicitation phase of the project is to nail down use cases for these “-ilities.” For example, usability can be defined as how much time or how many keystrokes it takes a novice user to accomplish a certain task.
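Once a quality attribute is phrased that way, it becomes precise enough to check. A minimal sketch of that idea, with an invented task and threshold:

```python
from dataclasses import dataclass

@dataclass
class QualityAttributeScenario:
    """A quality attribute made measurable: stimulus plus a pass/fail line."""
    attribute: str
    stimulus: str          # what happens to the system
    response_measure: str  # how we measure the response
    threshold: float       # maximum acceptable measured value

    def passes(self, measured: float) -> bool:
        return measured <= self.threshold

# Hypothetical usability scenario: a novice user adds a new contact
# in at most 12 keystrokes.
usability = QualityAttributeScenario(
    attribute="usability",
    stimulus="novice user adds a new contact",
    response_measure="keystrokes",
    threshold=12,
)

print(usability.passes(9))   # True: within the threshold
print(usability.passes(30))  # False: the UI needs work
```

The same shape works for performance (requests per second) or capacity, just with a different response measure and direction of comparison.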


How important are these Quality Attributes, or the requirements describing them? I’d say very important, because our architecture will be decided by them.

Faster, Better, Cheaper: Pick Two (1)

This is (the first part of) my submission to this year's QTech.

Introduction to Software Requirements and Architecture

1.Introduction

In software development, we have this unsolvable problem of pursuing shorter development time (faster), a better fit to ever-changing needs (better), and lower development cost (cheaper). It has been said that you can only pick two of these three. By hiring more people, you might end up with better quality and shorter development time at the price of higher cost.
That is only the theory. Most of the time, managers don’t even have sound evidence of what their decisions may lead to. For example, spending more money won’t necessarily yield a better product if the requirements fail to address the customers’ current and future needs properly. Similarly, extending development time may not yield less buggy software if the design is flawed or your team members are stressed by too many concurrent tasks.
A common misunderstanding of software engineering is that it’s all about process, as if following some heavy process would make all our quality and schedule issues go away.
No, not like that. Software engineering is about continuously optimizing the things we do, and providing evidence of that, according to our or our customers’ needs. There is no perfect process or framework that solves everyone’s problems. The only thing that matters is how we systematically measure and improve what we are doing, from team management, requirement acquisition, design/test methodology, and project/risk management to quality control. Everything that happens in the lifecycle of a software-intensive system has been a topic of interest, and we might find something useful in the research.
Among all the major topics of software engineering, requirements and architecture might appear to be the least useful to firmware developers. The common response is that our requirements have been decided by the operating system or the client, and that our design is really simple. However, I often see very old code that was designed years back. Further extensions and a growing customer base make it more and more difficult to change. If a design is going to live for more than five years or fifteen products, it might be worth the time to think carefully before we first put it there.
While coding conventions and code review [2] oversee the quality of the implementation, software architecture, or high-level design as some readings might call it, is the key to how the system can be maintained and extended. Also, many other implicit considerations like robustness or security cannot be expressed at the code or API level.
In this paper, I try to show how requirements and architecture might help firmware development by improving communication across the whole life cycle of development. Most of the references come from the Carnegie Mellon University Software Engineering Institute, or the SEI, which is famous for its contributions to CMM/CMMI [1].

10. REFERENCES
[1] http://www.sei.cmu.edu/cmmi/
[2] Go home early: How to improve code quality while reducing development and maintenance time. QTech forum 2007
[3] Paul Clements, Felix Bachmann, Len Bass, David Garlan, James Ivers, Reed Little, Robert Nord, and Judith Stafford, Documenting Software Architectures: Views and Beyond, http://www.sei.cmu.edu/publications/books/engineering/documenting-sw-arch.html
[4] Attribute-Driven Design (ADD), Version 2.0, http://www.sei.cmu.edu/publications/documents/06.reports/06tr023.html
[5] Quality Attribute Workshops (QAWs), Third Edition, http://www.sei.cmu.edu/publications/documents/03.reports/03tr016.html
[6] Len Bass, Paul Clements, and Rick Kazman, Software Architecture in Practice, Second Edition, http://www.sei.cmu.edu/publications/books/engineering/sw-arch-practice-second-edition.html
[7] Architecture Documentation, Views and Beyond (V&B) Approach http://www.sei.cmu.edu/architecture/arch_doc.html
[8] Frank J. van der Linden, Klaus Schmid, and Eelco Rommes, Software Product Lines in Action

How do you know?

Everything I learned from the 16-month software engineering program I attended can be summarized into one question: "how do you know?"

From the beginning of the 16-month project, we had to answer this question God knows how many times in public, under ruthless criticism from software researchers and professors.

  1. This is the problem our client needs us to solve. (How do you know?)

  2. We will finish requirement elicitation in Nov. (How do you know?)

  3. These are the requirements. (How do you know?)

  4. This design could satisfy our customer's needs. (How do you know?)

  5. Our team dynamics is good and improving. (How do you know?)

  6. Our meetings are efficient. (How do you know?)

  7. CMMI/Agile/Scrum/TSP process will lead us through this chaos. (How do you know?)

  8. Our client is usually happy about our results. (How do you know?)

  9. We spent too much overhead in meetings. (How do you know?)

  10. The main risks are X, Y, and Z. (How do you know?)

  11. Our estimation and plan makes sense. (How do you know?)

  12. The quality of our software is good. (How do you know?)


One of the major goals of software engineering, as opposed to programming skill or computer science, is to provide evidence and tools for management.

Visibility is something we need but seldom have in a software project. As my favorite professor always said: if you cannot tell where you are, a map is not going to help you much. If you don't know how far you are from the destination, how do you know when you are gonna finish?

Software architecture is another major pillar of software engineering. One important use of architecture is to facilitate requirement elicitation and project management. We can talk about that later.

If you are also interested in these topics, please leave a message. At least I would know somebody is reading :-)

Tuesday, February 03, 2009

I could not get a car loan

Everyone tells me I should take out a car loan to build up credit in the US. They didn't tell me it's hard to get a car loan without a credit history first. I had already agreed to a ridiculously high interest rate of 22%, but the financial specialist at the car dealer still couldn't find a banker for me.

My wife and I didn't like the specialist at Mossy Nissan because of her high-pressure style of selling, so we went out and tried ourselves. Since Bank of America didn't even want to issue me a credit card, we skipped that one. We walked into a big local banking institution named San Diego County Credit Union. The specialists there were very nice to us, and they said the interest rate should be much lower than 15% even for first-time buyers like us. However, they said they could not even start the application process without a formal driver's license. Since the nearest date I could book for the road test was Feb 3rd, there was no hope of getting the car loan from them in time.

In the end, I just went to the car dealer and wrote a check to pay it off. I hope I don't have to buy anything from Mossy Nissan ever again.

The 2nd intelligent remote and key

The Nissan Sentra has a nice keyless feature, which allows you to unlock the car and start the engine without taking the key out of your pocket. The "key" actually consists of a remote controller with a metal key embedded in it. The metal key is a backup in case the remote fails.

When we bought this used Sentra from Mossy Nissan, they only gave us one of these key sets. They never told us there should be TWO intelligent key sets for each car. I called them today and asked about it. The answer was exactly what I expected: "it is sold as is." The price for another intelligent key set is more than USD 250.

Don't go to Mossy Nissan unprepared. They will eat you alive…

Thursday, March 27, 2008

Attribute Driven Design

One of our mentors in the studio project, Felix, is an expert in Attribute Driven Design. He suggested that we use this technique, and we found it useful as a starting point.

The SEI's view of software architecture is that any architecture can satisfy any functional requirement, so you need the quality attributes to find the right design. Quality attributes are traditionally called non-functional requirements, but SEI people hate that name. First, we have to define some scenarios for each quality attribute the stakeholders care about. For example, we may want the system to be "high performance" in the sense that it can handle 1000 requests per second. This simple scenario becomes a limiting factor you need to consider when you design the architecture, and the SEI provides some very general techniques and architecture patterns for high-performance designs.

The idea is to pick the several most important quality attribute scenarios, make up a design to fulfill each of them, and then try to combine them. After this process, you have your first cut of the architecture. The next step is to further enrich it so it supports the other, minor quality attributes as well.

Monday, January 28, 2008

Challenges in building a design collaboration tool

The topic of our studio project is to create a much better collaboration tool, enhancing the shared whiteboard concept to be more UML-aware. The UML part is relatively easy with support from the Eclipse community. The whiteboard isn't.

Although there is a whole zoo of shared whiteboard software, nobody likes any of it. We wish to create a better whiteboard so the designers/architects/programmers at Bosch across the world can communicate better.

The other part of the tool is to provide a warning if a design change, expressed in UML, violates some rules set by the architects. It should be at a much higher level than the new OCL in UML; it must perform some kind of model checking.
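As a toy illustration of the kind of rule checking we have in mind, suppose the architects declare a layering rule and we check a dependency list extracted from the model against it; the module names and the rule itself are made up:

```python
# Dependencies extracted from a (hypothetical) UML model: (from, to).
dependencies = [
    ("ui", "service"),
    ("service", "database"),
    ("ui", "database"),   # this edge skips the service layer
]

# Architect's rule: the UI layer must never touch the database directly.
forbidden = {("ui", "database")}

def violations(dependencies, forbidden):
    """Return every dependency that breaks an architect-declared rule."""
    return [dep for dep in dependencies if dep in forbidden]

print(violations(dependencies, forbidden))  # [('ui', 'database')]
```

The real tool would have to derive such edges from live UML diagrams and support far richer rules, which is where the model-checking difficulty comes in.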

I am not sure if it is the best topic one can get for a 16-month project, but I am happy to think about something that can help people to communicate better.

Is software architecture really more difficult?

When they talk about software architecture here at CMU/SEI, they always say that documenting software is hard because you cannot see the real thing; you cannot draw a software architecture like drawing a 3D representation of a machine or a bridge. That is a myth. They say that because they do not understand mechanical or civil engineering at all.
The truth is, engineers in other disciplines also need to document their designs very carefully, in various ways much more complex than the three orthogonal views you may have seen in a mechanical/civil engineering handbook.

You have to calculate/simulate the force flowing through a beam using a force flow diagram, you have to calculate the gear structure of your gearbox using gear diagrams, and you have to calculate the flow of air/water/oil inside the system using yet another set of flow field diagrams.

You also need diagrams at different levels of detail to show how to manufacture or assemble your creation, and to what quality level, which can be very complicated.
Software is special because it is indeed so cheap and too easy to blow up. If we isolated every module, or even every function, behind process boundaries, the way a failed component in the real world is usually contained, the quality of software could be much better.
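A small sketch of what such a boundary buys you; the "component" here is just a stand-in script that crashes:

```python
import subprocess
import sys

# A buggy "component" run behind an OS process boundary. Its crash
# sets an exit code on the child process but cannot take the caller down.
crashing_code = "import sys; sys.exit(1)"   # stands in for a faulty module

result = subprocess.run([sys.executable, "-c", crashing_code])
print("caller survived, child exit code:", result.returncode)  # exit code 1
```

Had the same fault been linked into the caller's address space, the whole program would have gone down with it; the process boundary is what contains the blow-up.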

In the real world, architects rarely use new or customized materials and structures. They buy tested and well-understood parts and assemble/construct them using well-understood methods. Software is so cheap that everyone wants to invent his own version of sorting or database connections, etc.

The Internet, as a whole, is quite stable because all its components are isolated. A blown-up file server usually doesn't crash the intranet, let alone the whole Internet. If we can somehow make that level of isolation achievable in an economical manner, we can make everyday software much more stable than it is now.

Wednesday, January 03, 2007

CSS

It is mad to review CSS at this point. CSS is not bad considering the sales it has driven over the past years, and it should not be surprising that someone eventually broke it. CSS still works even against me, a lazy engineer in this field. But CSS will not work, nor will AACS, against professional pirates with the proper tools.

Recently, in the DVD Forum and related organizations, people have been working hard to make "managed copy" possible for CSS-protected movies. The idea is all right, but there are already better protection schemes like CPRM, VCPS, or even AACS. The major problem is that different forces are trying to create multiple solutions for this simple idea.

I cannot discuss all the movements openly, but I think it is crazy to reinvent almost everything for this old format.

Wednesday, November 15, 2006

GPL fever

In general, people in my company know very little about open-source software. Our legal department also spends a lot of time keeping people away from it. It attracted a lot of attention when I introduced an open-source AES package into our reference firmware.

Recently, people have been talking about GNU/Linux because some of our customers want us (not the department I work for) to switch to it from other commercial RTOSes. My people are very worried that the reference firmware would be forced into open source when we release it with GNU/Linux.

The license terms, GPLv2 specifically, and the licensing model behind GNU/Linux create a very difficult situation for embedded-system builders like us. In a low-end embedded system, from washing machines to cell phones, you link everything together into a single image as much as possible. Every part is essential to the distribution. Some contributors to GNU/Linux might think the whole package should be open-sourced. That is just not possible. We cannot afford to open up every source line we have written to our competitors.

The licensing model of GNU/Linux also lets every contributor file a lawsuit anywhere. We, as a commercial company, cannot afford that potential cost. It would be much easier if there were a single licensing entity to negotiate with. The collective work of Linux is great, but it is a nightmare to deal with when it comes to legal issues.

I think it might be better if people authorized some single entity when they contribute code to GNU/Linux, just like what MySQL/BSD does. It would still be free and open, but we would know whom to pay if we, for whatever reason, want to embed it in a toothbrush, closed-sourced.

Saturday, October 21, 2006

What Do You Get From A Co-Verification Tool?

We used SoC Designer to evaluate the performance of a CPU core with a pretty slow FLASH ROM chip for a new product. A suitable cache module would be sized to match the speed difference, too.
It should have been a simple task to compose this system, given that the new tool provides so many CPU cores, buses, cache components, and memory chips. We just had to wire them together, like in LabView or Lego. It was not too difficult, indeed, and the benchmark numbers were easily acquired using the software profiling feature of the simulated CPU core.
Things got a bit nasty when the bosses wanted to see benchmark results from yet another, cheaper core. To cut a long story short, we could not explain the enormous difference between the results from the two ought-to-be-similar CPU cores.
Any purely numerical simulation project faces the same problem. We cannot say which result is closer to the truth without a real platform or really strong technical support from the vendor, especially when the behavior of a pipelined, cached CPU core is quite difficult to explain. It would have been easier if we had had the source code for these components, but that was not possible.
Maybe the vendor should provide a large FPGA board with some preconfigured scenarios, so customers learn when they should "trust" the results. The CAE field in civil and mechanical engineering has been doing this for a very long time: new users learn the limits of simulation tools through experiments conducted in the real world. A digital system is much easier to understand, but that doesn't mean you can get a precise result without consulting your hardware designers about the detailed description of each component on the canvas.

Tuesday, October 17, 2006

AACS

Advanced Access Content System
One of my jobs is to implement the security system to fulfill the AACS requirements.
AACS is the more up-to-date version of CPRM, which was never very popular. The idea is to encrypt the content using very strong encryption, and then embed the key on the disc. Only valid software plus a valid drive can retrieve the key and play back properly. If the software or drive is hacked to do anything AACS doesn't like, it is revoked.
The software can be hacked to produce a perfect unencrypted AV stream. The drive can be hacked to produce or accept perfect copies.
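A grossly simplified sketch of the key structure: real AACS uses subset-difference key trees and AES, while the toy XOR "cipher" and the device list here are purely illustrative.

```python
# Toy model: the title key is wrapped separately for every device key.
# Revoking a device means omitting its wrapped copy from future discs.

def wrap(key: int, device_key: int) -> int:
    return key ^ device_key     # XOR stands in for real encryption

title_key = 0xC0FFEE
device_keys = {"player_a": 0x111111, "player_b": 0x222222, "player_c": 0x333333}
revoked = {"player_b"}          # player_b leaked its key, so it is revoked

# The "disc" carries the title key wrapped for every non-revoked device.
disc = {name: wrap(title_key, dk)
        for name, dk in device_keys.items() if name not in revoked}

def unwrap(disc, name, device_key):
    if name not in disc:
        return None             # revoked: nothing on the disc to decrypt
    return disc[name] ^ device_key

print(hex(unwrap(disc, "player_a", device_keys["player_a"])))  # 0xc0ffee
print(unwrap(disc, "player_b", device_keys["player_b"])) # None: revoked
```

Even in this toy, the weakness is visible: revocation only helps after you know which device key leaked, and a hacked player can keep the plaintext it already decrypted.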

The revocation mechanism is doomed to fail on an open platform like the PC. You have to know whom to revoke in the first place. They do have a forensic marking mechanism, but I do not believe that will work, either.

The problem with AACS is that the copy-protection part is too strong, while the online transaction part is addressed too weakly. People will have to replace their digital TVs and LCD monitors to watch HD content, but to this day no player, software or hardware, supports any new transaction model.

SoC Designer

One of my recent jobs has been to pick a new microcontroller for our System-on-Chip, which has used the 8051 family for years. Since the ARM core is the new 8051 for SoC design, we contacted ARM for a performance evaluation tool or kit. Their answer was SoC Designer, a complete software emulation of the whole SoC.

An SoC is not just the microcontroller/processor. When we say System on Chip, it is really a system. We have various buses running different protocols at different clock rates. We have multiple processors, usually MCU+DSP. We have FLASH, EEPROM, DRAM, and SRAM. We also have countless hardware components that work concurrently to off-load the main processors. And we are talking about a humble CD-ROM drive.

Precise and meaningful performance evaluation is really a tough job, so we just focused on the processor, cache, and FLASH subsystem.
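The back-of-the-envelope model behind that evaluation is simple; the cycle counts below are invented for illustration, not numbers measured with SoC Designer:

```python
def avg_access_cycles(hit_rate, cache_cycles, flash_cycles):
    """Average memory access time for a cache in front of slow FLASH."""
    return hit_rate * cache_cycles + (1 - hit_rate) * flash_cycles

# Hypothetical latencies: 1-cycle cache hit, 20-cycle FLASH access.
for hit_rate in (0.80, 0.90, 0.95):
    print(hit_rate, avg_access_cycles(hit_rate, 1, 20))
```

The simulation's job is to supply a realistic hit rate for real firmware traces; cache size is then chosen so the average access time stops dominating the CPU's cycle budget.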

The nice thing about SoC Designer is that you can model the system completely in software before the hardware is available. You can also simulate bugs or exceptions again and again without spending hours reproducing the situation on the real "embedded" hardware.

The problem with SoC Designer is that you have to model all, or at least a large part, of your system before you can really evaluate its performance with real firmware. To make things worse, the ready-to-use models for processors, caches, buses, and memory components are not so reliable when it comes to cycle accuracy. Without strong support, which we do not have, we cannot explain many performance differences and micro-behaviors of those models.

The idea is great, but we need more experience to trust this tool.