This paper outlines key aspects of a maximally informative and manageable ‘assessment topology’ for conducting an operational and/or mission assessment in a complex environment. It examines the structure of, and relationships between, lines of operation (LOOs), measures, and indicators at various organizational levels. The proposed methodology handles the trade-off between complexity and tractability in a flexible and efficient way, and the model is sufficiently generic to be adapted to a wide range of environments rather than tied to any particular mission. A few key points can be made on the basis of the authors’ experience with assessment in Afghanistan: 1) The complexity of the assessment framework should reflect the complexity of the assessed environment, in particular of its organizational structure; 2) The assessment framework should reflect the multiplicity of temporal and geographical scales present in the assessed environment; 3) High-level assessments need to include more than a simple roll-up of lower-level assessments; an iterative combination of top-down and bottom-up assessments might be required; and 4) When combining partial assessments into an overall situational picture, weighted averages should be avoided if possible, unless they are used to combine the same type of variables.
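Point 4 can be made concrete with a minimal sketch (not from the paper; the districts, indicators, and weights below are hypothetical). Averaging indicators with different units and opposite polarities produces a number with no interpretation, whereas normalizing each variable to a common, directionally aligned scale at least keeps the components comparable:

```python
# Two hypothetical indicators on different scales and with opposite polarity:
attacks_per_week = {"district_a": 40, "district_b": 2}     # lower is better
governance_score = {"district_a": 4.5, "district_b": 1.0}  # 1-5, higher is better

def naive_rollup(district, w_attacks=0.5, w_gov=0.5):
    # Weighted average of raw, unlike variables: the result is meaningless.
    return w_attacks * attacks_per_week[district] + w_gov * governance_score[district]

# The violent but well-governed district "scores" far higher than the
# quiet but poorly governed one, driven entirely by the units of one input:
print(naive_rollup("district_a"))  # 22.25
print(naive_rollup("district_b"))  # 1.5

# Safer: report like-typed variables separately, or normalize each onto a
# common [0, 1] scale (1 = best) before any combination is attempted.
def normalized(value, worst, best):
    return (value - worst) / (best - worst)

security = {d: normalized(a, worst=50, best=0) for d, a in attacks_per_week.items()}
governance = {d: normalized(g, worst=1, best=5) for d, g in governance_score.items()}
print(security)    # district_a is weak on security, district_b strong
print(governance)  # district_a is strong on governance, district_b weak
```

Even the normalized version only defers the problem: the choice of `worst`/`best` anchors and of weights is itself a judgment call, which is presumably why the authors recommend avoiding such roll-ups where possible.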
"Many of us in "middle management" are starting to hope that the fiscal train does go over the cliff so that we can clear our headquarters of all these extra staffers, reservists, contractors, etc, and the generals are forced to make real decisions. What matters? What can I do with limited resources? Do we really need all this crap or should we focus on the bare bones of what matters?"
Ahem, don't throw Reservists under the bus... The number of mobilized soldiers in non-mobilized reserve units is being reduced (probably rightfully so), and the gorilla nobody seems to speak of is the core abilities we actually perform during our missions.
I submit that (in general) the reason we have such large staffs is the unwieldy demands of our doctrine. Why is I/O its own section, instead of being integrated into our normal processes? Why do we need "joint" HQs? With all the supposed benefits of collaborative and parallel planning, there should be no need for large staffs.
Merry Christmas to all!
You mischaracterize our negative reaction as knee-jerk, and it is as far from knee-jerk as you can get. We have had these assessments shoved down our throats since the beginning of this conflict, and what you see as "we need to try," those of us more seasoned see as wasting time on non-productive processes. You want to experiment with this, do it somewhere else; we don't need any more confusion and staff distractions here.
You can see another view that relates to this at
It is nonsense to think that any level of analytical rigor can definitively and quantitatively turn any collection of data into truth in such complex situations. We simply are not as smart as we think we are and human interactions cannot be summed up into numbers the way chemical reactions and petri dish cultures can. It is not "tough to study this empirically," it is impossible and misleading. For all the rigor here, it might as well be witchcraft because human interactions on this scale are just too complex and follow logics or the lack thereof that invalidate empirical models. The data are often simply wrong because the people who are generating the data suffer from the biases you highlight. The aggregation only further distorts the picture. The answer is not to come up with a more dispassionate "scientific" system. The answer is to educate and select commanders and staffs that have a more dispassionate understanding of the world, have greater strategic context, and have less faith in the ability of science, technology, and the indomitable American spirit to radically remake the world in 180 days or less. That isn't happening anytime soon. Until that happens, all the data in the world won't matter anyway.
I congratulate the authors on this very thoughtful and thought-provoking article. I'm not sure I'm completely on board with all of their arguments and assumptions, but this certainly represents a productive addition to the discourse. In any case, I'm glad someone much smarter than me is taking a hard look at how to improve our operational assessments process in Afghanistan. Right now, that process is broken.
It's discouraging to see that there still seems to be such a knee-jerk reaction against empirical analysis of progress in complex environments. The constant refrain seems to be "it's tough to study this empirically, therefore we shouldn't try." You know what else is difficult? Invading a large Central Asian nation and attempting to re-engineer an entire society while simultaneously fighting an amorphous group of xenophobes and religious extremists. Given such a complex task, I'm baffled at the expectation that assessing progress could be achieved without a tremendous amount of time, energy, and analytic rigor. Of course the opinions and intuition of the tactical-level battlespace owners should be an input to the assessment process, but this can't be THE answer. As the authors discuss, tactical military echelons often lack the strategic context to understand the relationship between the local dynamics they observe and the broader dynamics playing out at the regional and national levels. This is without even getting into the question of biases. How many times have we seen commanders' assessments to the effect that "things were horrible when we got here, but now that we're on our way out, everything is just fabulous"? It's human nature to want one's performance to be viewed in a favorable light, to find validation in one's own efforts and sacrifice, and to do the same on behalf of the troops under one's charge.
The authors are struggling with the assessment process, and those of us who are or were in the field are struggling to comprehend why we even need these assessments to begin with. Commanders would be better informed if they would get out and talk to their troops living among the locals, and talk to the locals themselves, to get a meaningful understanding of what is important and what needs to be done. I'm sure these assessments have the potential to occasionally identify an issue that we might have missed otherwise, but they are largely distractions from what is really important for commanders and their staffs to understand. I think we'll eventually get to the point where the soldiers on point are fighting a different war than the commander and staff leading, supporting, and planning the overall effort. I just hope that in the end these assessments are nothing more than another voice in the collective input the commander adds to his personal assessment. Just because the assessment indicates we may be red in a particular area doesn't mean that area is important. To some extent I think the authors said that. We all know commanders hunger for these assessments and the associated PowerPoint slides. Eye candy has replaced real analysis and understanding.
Peter- agree. I cringe every time I see the hubris of some ORSA types. They remind me of fundamentalist religious folk: they are so convinced they are right that there is no way of even holding a discussion with them. A buddy of mine in Kabul faces the retort "you are using anecdotes" every time he brings up an issue with their logic. I thought McNamara's "whiz kids" ways of doing business were rejected by McMaster and his Dereliction of Duty, among other things...
"The complexity of the assessment framework should reflect the complexity of the assessed environment, in particular of its organizational structure."
That reminds me of the issue with models: they can't be as complex as what they are trying to model, which makes them problematic when used as more than just one tool of many. Is it really useful to make your assessment framework complex- or even possible? (Wait, don't answer that second question.) I think assessments are tools, but, unfortunately, we have tied them into a system of how to approach problems- a philosophy, really- and this philosophy determines how we do things no matter what we're faced with. But, by their nature, complex environments render causal logic impossible. When I see "LOOs" applied to a complex environment I have to just shake my head.
Figure 8 pretty much says it all. What are we doing? I mean, what in the world are we doing? This is reductio ad absurdum.

I get it. I get what they're trying to do, and conceptually, I agree with the broad idea. You can't just nest assessments, compare apples and oranges, and weight apples minus double-weight oranges, or pomegranates in another district and figs in yet another, to get a fruit score. I didn't read it closely enough to know if I agree or disagree with the details, and as a piece of abstract scholarship it is just fine. But when I look at the amount of effort that went into this, the amount of effort it would take to implement such things, and the amount of money that people and things like this cost, I conclude that these are efforts and resources misspent.

No matter how much rigor you put into analysis, Afghanistan is not a lab, and humans are nowhere near smart enough to definitively quantify complex socio-economic and political interactions. On top of that, these days we spend so much time trying to define fancy "LOOs," fancy assessment processes, and data pulls that we're drifting off into la-la land, especially when, at the end of the day, harried action officers often give bogus information. How did the interaction go? It was great. Score it a 5.

Many of us in "middle management" are starting to hope that the fiscal train does go over the cliff so that we can clear our headquarters of all these extra staffers, reservists, contractors, etc., and the generals are forced to make real decisions. What matters? What can I do with limited resources? Do we really need all this crap, or should we focus on the bare bones of what matters? Unfortunately, it will be a while after the money dries up before the good ideas are finally pared from the tree of stupidity.