Monday, June 4, 2012

A Value Centric Approach to Modularization

I am happy to announce that my Master's thesis has been accepted and is available for all.  While there is an official abstract within the thesis, in summary I was trying to find a way to help design engineers focus system modularization efforts.  The goal is to create modules only in the areas of a system that are likely to need to change in order for the system to remain value robust.  Further, I wanted to provide a way to check that the target system modularization had actually been achieved.

Feedback or thoughts on follow-up research are always welcome.

Monday, February 13, 2012

Prototypes, ambiguity, and tradespace exploration

I ran across an HBR story about how a single prototype built early in the system or product development cycle can prematurely lock in project outcomes.  This is reported to happen because the team becomes focused on the single prototype instead of remaining in the problem space for a longer period of time.

Tradespace exploration, and in particular tradespace exploration rooted in what is valuable to the acquirer or customer, could be one countermeasure to this team dynamic.  Exploring hundreds or thousands of designs parametrically is one way to avoid settling on and optimizing a single solution too early in the development process.  Further, mapping designs to stakeholder utility keeps the development team focused on the customer's problem domain.
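To make the idea concrete, here is a minimal parametric tradespace sweep; the design variables, attribute models, and utility weights are all hypothetical, a sketch of the approach rather than the models used in my research.

```python
# Minimal tradespace exploration sketch: enumerate designs parametrically,
# score each with a multi-attribute utility, and keep the Pareto-efficient set.
# All design variables, attribute models, and weights below are hypothetical.
from itertools import product

antenna_gains = [10, 20, 30]        # dB
power_levels = [100, 200, 400]      # W
orbit_altitudes = [400, 800, 1200]  # km

def attributes(gain, power, altitude):
    data_rate = gain * power / altitude               # notional performance model
    coverage = altitude / 1200.0                      # normalized 0..1
    cost = 0.5 * power + 2.0 * gain + 0.1 * altitude  # notional cost ($M)
    return data_rate, coverage, cost

def utility(data_rate, coverage):
    # Weighted sum of normalized attributes; the weights stand in for
    # elicited stakeholder preferences.
    return 0.6 * (data_rate / 30.0) + 0.4 * coverage

designs = []
for g, p, a in product(antenna_gains, power_levels, orbit_altitudes):
    rate, cov, cost = attributes(g, p, a)
    designs.append({"gain": g, "power": p, "alt": a,
                    "utility": utility(rate, cov), "cost": cost})

# A design stays on the frontier if no other design offers more utility
# at equal or lower cost.
pareto = [d for d in designs
          if not any(o["utility"] > d["utility"] and o["cost"] <= d["cost"]
                     for o in designs)]

for d in sorted(pareto, key=lambda d: d["cost"]):
    print(d)
```

Even in this toy form, the team is reasoning about the shape of the utility-cost frontier rather than polishing a single point design.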

My particular research in this area is twofold:
1.  Use of multi-attribute tradespace exploration for systems solutions in the competitive commercial space with many sellers competing for many consumers (the current research has been primarily in the context of a monopsony, such as the U.S. Department of Defense or NASA).
2.  The use of tradespace exploration as a value-centric approach to target system modularization efforts.

As I get articles out on each of these areas I'll provide additional pointers.

Thursday, October 13, 2011

Norms of Validation and the Agile Community

Had a great dinner with Mike Cottmeyer tonight in Bean Town.  At one point we got on the subject of the nature of research and validation expectations.  I was reminded that someone once told me that different sciences have different validation and correlation expectations, including different thresholds at which they will reject the null hypothesis.

In high energy physics, the expectation is a confidence level of at least 99.9999% before a result is accepted.
Depending on the engineering community, 95-99% or greater is expected.
In the social sciences, it is not unusual to accept R² values of 30-50% as "good enough".  Human interactions are considered so complex that when R² values exceed 90%, one has to check that you aren't just measuring the same construct in different ways.
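For concreteness, here is how one of those R² figures would be computed for a simple linear fit; the data is made up purely to illustrate the calculation.

```python
# Coefficient of determination (R^2) for a simple linear fit.
# The x/y data below is made up purely to illustrate the calculation.
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.7, 12.3])  # noisy, roughly y = 2x

slope, intercept = np.polyfit(x, y, 1)
y_hat = slope * x + intercept

ss_res = np.sum((y - y_hat) ** 2)     # residual sum of squares
ss_tot = np.sum((y - y.mean()) ** 2)  # total sum of squares
r_squared = 1 - ss_res / ss_tot

print(f"R^2 = {r_squared:.4f}")  # near 1.0 here; 0.3-0.5 might pass in the social sciences
```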

These communities have other norms as well.  A couple of them are that research must be falsifiable and replicable.  Another typical norm is that a minimum level of validation is peer review by people with PhDs and publication in a journal associated with the discipline.

These are the norms of these communities.  We can argue about them; we can even argue about whether these communities hold themselves to these norms all the time.

It struck me that the agile community has its own set of norms for considering work valid and something to be built upon.  I would argue that the agile community currently has validation norms well below those of the physical sciences, engineering, and the social sciences.  Sometimes a poorly done case study (by the standards of the social sciences) or a claim published in book form (but not peer reviewed) is sufficient for validation in this community.  Sometimes it is just a blog post by a well-paid consultant.

As agile software development enters these other communities (such as the engineering systems community), the agile community shouldn't be surprised if it is expected to reach new levels of validation before its findings are accepted.

This is a great opportunity for research.  Agile is relatively new, and by now there should be enough cases out there to start building serious research upon.

Thursday, May 5, 2011

Agilists strike again

Continuing my critique of research in agile methods, which started with this (relatively) popular post on integrated concurrent engineering compared to agile software methods.

I ran across this post by Dean Leffingwell and this post by Chad Holdorff, extolling the "proper" mixture of component vs. feature teams (definitions included, somewhat, in those posts) on a project.  The problem with these posts is that there are no studies showing a correlation between the two variables on Dean's graph: the proposed mixture of component and feature teams, and the productivity and/or quality of output of the teams that do and do not follow this proposal.  Additionally, there is no notion of context: what types of systems does this model work for?  Where does it break down?  At best, this mixture is simply a hypothesis that needs to be tested through proper study.  The graphs should be marked "notional" and for illustrative purposes only (i.e., they are fiction).

Now compare this notional mixture for structuring teams with the work on using tools like the Design Structure Matrix to organize and improve global product development.  A good starting point would be Steve Eppinger's paper, A Model-Based Method for Organizing Tasks in Product Development.  Here we have a formalized model that has been applied and whose outcomes have been studied for productivity changes.  There are other, dare I say, mature ways to manage dependencies.
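To make the contrast concrete, here is a minimal sketch of a Design Structure Matrix in code; the tasks and dependencies are hypothetical, and real DSM work of the kind in Eppinger's paper goes much further with clustering and sequencing algorithms.

```python
# Minimal Design Structure Matrix (DSM) sketch for task dependencies.
# The tasks and dependencies are hypothetical, for illustration only.
import numpy as np

tasks = ["requirements", "architecture", "component A", "component B", "integration"]
n = len(tasks)

# dsm[i, j] = 1 means task i depends on an output of task j.
dsm = np.zeros((n, n), dtype=int)
dsm[1, 0] = 1  # architecture depends on requirements
dsm[2, 1] = 1  # component A depends on architecture
dsm[3, 1] = 1  # component B depends on architecture
dsm[4, 2] = 1  # integration depends on component A
dsm[4, 3] = 1  # integration depends on component B
dsm[1, 4] = 1  # architecture is revisited after integration feedback (a cycle)

def coupled_blocks(m):
    """Group tasks that sit in dependency cycles, via transitive reachability."""
    reach = m.astype(bool)
    for k in range(len(m)):  # Warshall's algorithm for transitive closure
        reach |= np.outer(reach[:, k], reach[k, :])
    blocks, seen = [], set()
    for i in range(len(m)):
        if i in seen:
            continue
        block = {i} | {j for j in range(len(m)) if reach[i, j] and reach[j, i]}
        seen |= block
        blocks.append(sorted(block))
    return blocks

for block in coupled_blocks(dsm):
    print([tasks[i] for i in block])
# ['requirements']
# ['architecture', 'component A', 'component B', 'integration']
```

The point is that the dependencies are explicit and analyzable, so the resulting team or task structure can be studied rather than asserted.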

The agile crowd needs to move past heuristics and case studies with no notion of context to describe what works and what doesn't.  Further, the definition of "works" needs to include a notion of output quality (and not just the absence of defects).  There are better ways than simple heuristics and one-off case studies.

Wednesday, May 4, 2011

System Architecture Principle 8: Beware of software

Tagline: Beware of software.

Descriptive version: Software grows very complicated very quickly.  It can provide high leverage, but it can be dangerous precisely because of how complicated it becomes.  Further, software does not have "laws of physics" the way physical systems do, making it difficult to reason about using models (the code is the model!).

Prescriptive version: When deciding to allocate functions to software, be aware that you are substantially increasing the internal complexity and number of operating modes of your system.

Discussion: I am not sure if this is an architecture principle yet; I am certain it will be one with enough time.  Of my principles, it is certainly the one with the shortest lifespan so far.

Software is this very new thing that can quickly grow substantially more complicated than the physical system in which it operates.  The number of operating modes of software is estimated as the number of inputs times the number of outputs raised to some power (Jayaswal2007).  Further, software tends to creep from being something embedded in a component, helping that component deliver its functions, to an item that becomes a system bus interacting with just about every component in the system.  On top of all of this, the work of software is largely hidden in complicated code in bits on software developers' computers, making it even harder to reason about.
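A quick back-of-the-envelope calculation shows why this matters; the input count below is made up, and the exponential growth model is a simplification for illustration, not the exact formula from (Jayaswal2007).

```python
# Rough illustration of how quickly a software state space grows.
# The number of flags is hypothetical; the point is the exponential blow-up.
n_boolean_inputs = 20  # e.g., configuration flags, mode switches, feature toggles
n_input_combinations = 2 ** n_boolean_inputs

print(f"{n_input_combinations:,} input combinations to reason about")  # 1,048,576
```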

Citation
B. K. Jayaswal and P. C. Patton. Design for Trustworthy Software. Prentice Hall, 2007.

Tuesday, May 3, 2011

System Architecture Principle 7: You can't escape the laws of physics

Tagline: You can't escape the laws of physics (Augustine1996).

Descriptive version: No amount of being clever will allow your system to violate the laws of physics.

Prescriptive version: You can't change the laws of physics; use them, obey them, but don't think for a moment that as an engineer or architect that you can escape them. 

Discussion: This is one of the principles that reflects my background, my undergraduate degree being in physics.  I spent the better part of five years (I started in graduate school and decided it was not for me at the time) studying how the universe works and the models that explain why things happen all around us.  As you know, this attempt to understand the universe is very much related to engineering, but in some ways very different from engineering.  Engineering seems to be more about how, given the set of laws that govern the workings of the universe, we (engineers) can leverage them to affect the world around us.  It is very tempting to confuse this ability to "engineer" the world with the ability to "engineer" the laws of physics.  Falling into this confusion would likely lead to very undesirable consequences.

Citation
N. R. Augustine. 1996 Woodruff Distinguished Lecture Transcript. http://sunnyday.mit.edu/16.355/Augustine.htm, 1996. Section title: "Conceptual Brilliance Doesn't Blind the Laws of Physics."

Monday, May 2, 2011

System Architecture Principle 6: System design drives life cycle costs

Tagline: System design drives life cycle costs (Blanchard2011) (Crawley2010).

Descriptive version: The systems architecture, and hence the work of the architect, has the largest impact on system life cycle costs.

Prescriptive version: The work of the system architect can drive significant changes in the expected life cycle costs of the end product.  It is important for the architect to understand the life cycle cost constraints (based on the perceived value to the acquiring organization and the desired financial margins of the supplying organization) and to make sure the system architecture fits within those targets.

Discussion: This principle is supported by the notion that the highest management leverage is at the very beginning of the project, when the least amount of money has been committed.  That is, the very first step, deciding on the system architecture, is the biggest opportunity to steer a project toward its desired life cycle cost.
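A notional committed-versus-spent profile makes the argument visible; every number below is made up for illustration, not data from (Blanchard2011) or (Crawley2010).

```python
# Notional committed-vs-spent life cycle cost profile by program phase.
# All percentages are made up to illustrate the "leverage is earliest" argument.
phases = ["concept", "design", "build", "operate", "dispose"]
spent_per_phase = [2, 8, 30, 55, 5]       # % of life cycle cost actually spent in phase
committed_by_end = [70, 90, 97, 99, 100]  # % of life cycle cost locked in by decisions

cumulative_spent = 0
for phase, spent, committed in zip(phases, spent_per_phase, committed_by_end):
    cumulative_spent += spent
    print(f"{phase:>8}: {cumulative_spent:3d}% spent, {committed:3d}% committed")
```

The gap between the two columns in the earliest phases is where the architect's leverage lives.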

Citations
B. S. Blanchard and W. J. Fabrycky. Systems Engineering and Analysis. Prentice Hall, 2011.

E. Crawley. ESD.34 Lecture 1, September 2010.