Think of a state machine with many states and one special ‘deleted’ state that can be reached directly from every other state. The machine also has an ‘undelete’ operation that jumps out of the ‘deleted’ state back into whatever state was active before the deletion. Faced with such a state machine, most people say things like: “oh, so many states” or “it’s so big”. Someone aware of the concept of complexity, in contrast, would say something like “oh, what a nasty undelete function” or “without the undelete it would be much cheaper to implement”.
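To see why the undelete is the expensive part, here is a minimal sketch (the states DRAFT, REVIEW and PUBLISHED are made-up placeholders). Without undelete, the machine is memoryless; with it, every deletion must record the prior state, and that extra piece of history is where the complexity lives:

```python
from enum import Enum, auto

class State(Enum):
    DRAFT = auto()      # hypothetical ordinary states
    REVIEW = auto()
    PUBLISHED = auto()
    DELETED = auto()    # reachable directly from everywhere

class Machine:
    def __init__(self):
        self.state = State.DRAFT
        self._before_delete = None  # extra memory, needed only for undelete

    def delete(self):
        if self.state is not State.DELETED:
            self._before_delete = self.state  # remember where we came from
            self.state = State.DELETED

    def undelete(self):
        # jump back into the state that was active before deletion
        if self.state is State.DELETED and self._before_delete is not None:
            self.state = self._before_delete
            self._before_delete = None
```

Note that `delete` alone would be a plain transition, implementable with no state beyond the current one; `undelete` forces the machine to carry history, which is exactly the kind of cost a size-only view of the diagram misses.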
Complexity-aware people are able to recognize complexity, and weigh it more heavily than things like size. Why? Because we’re in the software world; in the physical world this may well be different. Speaking from experience, an analyst can prove very beneficial if she is able to address complexity issues early in analysis. ‘Addressing’ complexity can of course mean anything: accepting it, postponing it, checking it, ignoring it, or whatever. What is important is that the earlier you do it, the more options you have.
For detecting complexity in software analysis, modelling is an essential tool, since by definition modelling is about structure (and not, in the first place, about languages, as some of the literature might suggest). In the end it’s about having an eye for structure, i.e. being able to say things like: “ok, mind maps look quite intuitive, but in the end it’s just a taxonomy”.
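The mind-map remark can be made concrete: strip away the visual layout and what remains is a tree, i.e. a taxonomy. A nested dict (with made-up topic names) captures one completely:

```python
# A mind map, minus its visual appeal, is structurally just a tree.
# The topic names below are invented for illustration.
mind_map = {
    "Project": {
        "Goals": {"Revenue": {}, "Reach": {}},
        "Risks": {"Budget": {}, "Schedule": {}},
    }
}

def depth(tree):
    """Depth of the taxonomy; an empty node counts as depth 1."""
    return 1 + max((depth(child) for child in tree.values()), default=0)
```

Having an eye for structure means seeing that nothing in the mind map required more than this parent–child relation.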
Thus creating models in analysis should always go along with awareness of their complexity issues.
Notice that this is about the complexity of business logic; technical complexity is a different issue, see Part I.
Of course, balancing business and technical complexity is far from trivial, and is the actual mission of design, as nicely described by Bertrand Meyer here.