Monday, January 28, 2008

Challenges in building a design collaboration tool

The topic of our studio project is to create a much better collaboration tool by enhancing the shared-whiteboard concept to be UML-aware. The UML part is relatively easy with support from the Eclipse community. The whiteboard part isn't.

Although there is a whole zoo of shared-whiteboard software, nobody likes any of it. We want to build a better whiteboard so that designers, architects, and programmers at Bosch sites across the world can communicate better.

The other part of the tool is to warn when a design change, expressed in UML, violates rules set by the architects. It should work at a much higher level than the new OCL in UML; it must perform some kind of model checking.
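
To make it concrete, here is a toy Python sketch of the kind of rule check I have in mind; the layers, the rule, and the model edges are all hypothetical examples, not part of our actual tool.

    # Toy check of an architect-defined layering rule over a UML-like
    # dependency model. Layers, rule, and edges are all hypothetical.
    ALLOWED_DEPENDENCIES = {
        "ui": {"application"},
        "application": {"domain"},
        "domain": set(),
    }

    # Dependency edges extracted from the (hypothetical) UML model.
    model = [
        ("ui", "application"),
        ("application", "domain"),
        ("ui", "domain"),  # a designer just added this shortcut
    ]

    for src, dst in model:
        if dst not in ALLOWED_DEPENDENCIES.get(src, set()):
            print(f"warning: {src} -> {dst} violates the layering rule")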

I am not sure it is the best topic one can get for a 16-month project, but I am happy to be thinking about something that helps people communicate better.

Is software architecture really more difficult?

When people here at CMU/SEI talk about software architecture, they always say that documenting software is hard because you cannot see the real thing: you cannot draw a software architecture the way you draw a 3D representation of a machine or a bridge. That is a myth. They say it because they do not understand mechanical or civil engineering at all.
The truth is that engineers in other disciplines also have to document their designs very carefully, in ways far more complex than the three orthogonal views you may have seen in a mechanical or civil engineering handbook.

You have to calculate or simulate the forces flowing through a beam using force-flow diagrams, work out the gear structure of your gearbox using gear diagrams, and compute the flow of air, water, or oil inside the system using yet another set of flow-field diagrams.

You also need diagrams at different levels of detail to show how to manufacture or assemble your creation, and to what quality level, which can get very complicated.
Software is special because it is indeed so cheap and so easy to blow up. If we isolated every module, or even every function, behind process boundaries, the way a failed component in the physical world is usually contained, software quality could be much better.
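
As a toy Python sketch of what I mean, assuming OS processes as the isolation boundary (the flaky_parser function and its failure mode are made up):

    # Run an unreliable function in a separate process so that its
    # failure cannot take down the caller. flaky_parser is made up.
    from concurrent.futures import ProcessPoolExecutor

    def flaky_parser(data):
        if data == "bad":
            raise RuntimeError("parser blew up")
        return data.upper()

    def isolated_call(func, arg):
        with ProcessPoolExecutor(max_workers=1) as pool:
            try:
                return pool.submit(func, arg).result()
            except Exception as exc:
                return f"component failed, caller survives: {exc}"

    if __name__ == "__main__":
        print(isolated_call(flaky_parser, "good"))  # GOOD
        print(isolated_call(flaky_parser, "bad"))   # failure is contained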

In the real world, architects rarely use new or customized materials and structures. They buy tested, well-understood parts and assemble them using well-understood methods. Software is so cheap that everyone wants to invent his own version of sorting, database connections, and so on.

The Internet, as a whole, is quite stable because all its components are isolated. A blown-up file server usually doesn't crash the intranet, let alone the whole Internet. If we could somehow make that level of isolation achievable in an economical manner, we could make everyday software much more stable than it is now.

Wednesday, January 03, 2007

CSS

It may look mad to review CSS at this point. CSS is not bad, considering the sales it has driven over the past years, and nobody should be surprised that someone eventually broke it. CSS still works even on me, a lazy engineer in this very field. What CSS will not do, and AACS will not either, is stop professional pirates with the proper tools.

Recently, people in the DVD Forum and related organizations have been working hard to make "managed copy" possible for CSS-protected movies. The idea is all right, but there are already better protection schemes such as CPRM, VCPS, or even AACS. The major problem is that different forces are trying to create multiple solutions for this simple idea.

I cannot discuss all the activity openly, but I think it is crazy to reinvent almost everything for this old format.

Wednesday, November 15, 2006

GPL fever

In general, people in my company know very little about open-source software. Our legal department also spends a lot of time keeping people away from it. It attracted a lot of attention when I introduced an open-source AES package into our reference firmware.

Recently, people have been talking about GNU/Linux because some of our customers want us (though not the department I work for) to switch to it from a commercial RTOS. My colleagues are very worried that the reference firmware would be forced into open source when we release it with GNU/Linux.

The license terms, GPLv2 specifically, and the licensing model behind GNU/Linux create a very difficult situation for embedded-system builders like us. In a low-end embedded system, anything from a washing machine to a cell phone, you link everything together into a single image as much as possible, and every part is essential to the distribution. Some GNU/Linux contributors might think the whole package should therefore be open-sourced. That is just not possible: we cannot afford to open up every source line we have written to our competitors.

The GNU/Linux licensing model also lets every contributor file a lawsuit anywhere. As a commercial company, we cannot afford that potential cost. It would be much easier if there were a single licensing entity to negotiate with. The collective work behind Linux is great, but it is a nightmare to deal with when it comes to legal issues.

I think it might be better if contributors authorized a single entity when they contribute code to GNU/Linux, just like MySQL or the BSDs do. The code would still be free and open, but we would know whom to pay if, for whatever reason, we wanted to embed it in a toothbrush, closed-sourced.

Saturday, October 21, 2006

What Do You Get From A Co-Verification Tool?

We used SoC Designer to evaluate the performance of a CPU core paired with a pretty slow FLASH ROM chip for a new product. A suitable cache module also had to be sized to bridge the speed difference.
Assembling this system should be simple, given that the tool provides so many CPU cores, buses, cache components, and memory chips; we just had to wire them together, as in LabVIEW or Lego. Indeed, it was not too difficult, and the benchmark numbers were easily acquired using the software profiling feature of the simulated CPU core.
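
Roughly speaking, the cache-sizing part reduces to average-memory-access-time arithmetic. A toy Python calculation, with every latency and miss-rate number made up for illustration:

    # Average memory access time (AMAT) in CPU cycles for a core
    # reading through a cache from slow FLASH. All numbers are made up.
    CACHE_HIT_CYCLES = 1     # assumed on-chip cache hit latency
    FLASH_MISS_CYCLES = 40   # assumed FLASH read latency on a miss

    def amat(miss_rate):
        return CACHE_HIT_CYCLES + miss_rate * FLASH_MISS_CYCLES

    for miss_rate in (0.02, 0.05, 0.10):
        print(f"miss rate {miss_rate:4.0%} -> AMAT {amat(miss_rate):.1f} cycles")

Even this toy arithmetic shows how strongly the final number depends on the miss rate, which is exactly what the cache sizing has to control.
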
Things got a bit nasty when the bosses wanted to see benchmark results from yet another, cheaper core. To cut a long story short, we could not explain the enormous difference between the results of the two supposedly similar CPU cores.
Any purely numerical simulation project faces the same problem. We could not say which result was closer to the truth without a real platform or really strong technical support from the vendor, especially since the behavior of a pipelined, cached CPU core is quite difficult to explain. It would have been easier if we had the source code for these components, but that was not possible.
Maybe the vendor should provide a large FPGA board with some preconfigured scenarios, so customers can learn when to "trust" the results. The CAE field in civil and mechanical engineering has been doing this for a very long time: new users learn the limits of simulation tools from experiments conducted in the real world. A digital system is much easier to understand, but that doesn't mean you can get a precise result without consulting your hardware designers about the detailed behavior of each component on the canvas.

Tuesday, October 17, 2006

AACS

Advanced Access Content System
One of my jobs is to implement the security system that fulfills the AACS requirements.
AACS is the more up-to-date version of CPRM, which never became very popular. The idea is to encrypt the content with very strong encryption and then embed the key on the disc. Only a valid piece of software plus a valid drive can retrieve the key and play the content back properly. If the software or the drive is hacked to do anything AACS doesn't like, it is revoked.
The software can be hacked to produce a perfect unencrypted AV stream; the drive can be hacked to produce or accept perfect copies.
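
The real scheme uses AES and a subset-difference broadcast-encryption tree, but a toy Python model of the key-hierarchy-and-revocation idea looks roughly like this; the XOR "cipher", the key values, and the device names are stand-ins, not the actual AACS media key block format:

    # Toy key hierarchy with revocation. Real AACS uses AES and a
    # subset-difference tree; everything below is a stand-in.
    def toy_encrypt(key, value):
        return key ^ value          # placeholder for real encryption

    toy_decrypt = toy_encrypt       # XOR is its own inverse

    MEDIA_KEY = 0x5EC2E7            # protects the disc's content keys
    device_keys = {"player_a": 0x111111, "player_b": 0x222222}
    revoked = {"player_b"}          # player_b leaked its key

    # The "media key block" on a new disc: the media key encrypted
    # under every device key that has not been revoked.
    mkb = {dev: toy_encrypt(key, MEDIA_KEY)
           for dev, key in device_keys.items() if dev not in revoked}

    for dev, key in device_keys.items():
        if dev in mkb:
            print(dev, "recovers media key", hex(toy_decrypt(key, mkb[dev])))
        else:
            print(dev, "is revoked and cannot play new discs")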

The revocation mechanism is doomed to fail on an open platform like the PC: you have to know whom to revoke in the first place. There is a forensic-marking mechanism, but I do not believe it will work either.

The problem with AACS is that the copy-protection part is too strong while the online-transaction part is addressed too weakly. People will have to replace their digital TVs and LCD monitors to watch HD content, yet to this day no player, software or hardware, supports any new transaction model.

SoC Designer

One of my recent jobs has been to pick a new microcontroller for our system-on-chip, which has used the 8051 family for years. Since the ARM core is the new 8051 of SoC design, we contacted ARM for a performance-evaluation tool or kit. Their answer was SoC Designer, a complete software emulation of the whole SoC.

An SoC is not just the microcontroller or processor. When we say system-on-chip, it really is a system. We have various buses running different protocols at different clock rates; multiple processors, usually an MCU plus a DSP; FLASH, EEPROM, DRAM, and SRAM; and countless hardware components working concurrently to off-load the main processors. And we are talking about a humble CD-ROM drive here.

Precise and meaningful performance evaluation is a really tough job, so we focused only on the processor, cache, and FLASH subsystem.

The nice thing about SoC Designer is that you can model the system completely in software before the hardware is available. You can also replay bugs or exceptions in software again and again without spending hours reproducing the situation on the real "embedded" hardware.
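
To give a flavor of this kind of software modeling, here is a bare-bones Python sketch of a transaction-level read path with cycle counting; the component names and latencies are invented, and this is not SoC Designer's actual API:

    # A CPU issuing reads through a bus to FLASH, counting cycles.
    # Names and latencies are invented; this is not SoC Designer's API.
    class Flash:
        READ_CYCLES = 40
        def read(self, addr):
            return self.READ_CYCLES            # cycles consumed by the access

    class Bus:
        ARBITRATION_CYCLES = 2
        def __init__(self, target):
            self.target = target
        def read(self, addr):
            return self.ARBITRATION_CYCLES + self.target.read(addr)

    class Cpu:
        def __init__(self, bus):
            self.bus = bus
            self.cycles = 0
        def load(self, addr):
            self.cycles += self.bus.read(addr)

    cpu = Cpu(Bus(Flash()))
    for addr in range(0, 64, 4):               # fetch a small code region
        cpu.load(addr)
    print("total cycles:", cpu.cycles)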

The problem with SoC Designer is that you have to model all, or at least a large part, of your system before you can really evaluate its performance with real firmware. To make things worse, the ready-to-use models of processors, caches, buses, and memory components are not so reliable when it comes to cycle accuracy. Without strong support, which we do not have, we cannot explain many of the performance differences and micro-behaviors of those models.

The idea is great, but we need more experience before we can trust this tool.