In the previous post, “Growth and Scaling Downfalls – Part III,” we discussed the strategic aspects of a scaling project. The next topic on the scaling preparation “to do” list is measuring success and failure.
Scaling and growth both depend a great deal on experimentation, from the tactical level (deciding who will do what) to the strategic level (defining success or failure). That kind of decision making naturally requires a great deal of analysis, whether qualitative or quantitative.
Data-driven quantitative analysis is, or should be, the basis of virtually all business decisions. Though analytics is an established field, the volume of data that was previously inaccessible or impractical to use has changed it considerably. The sheer size of the data sets now available has also created several side effects for small and mid-size organizations, ranging from the increased cost of proper analysis to “analysis paralysis.” Hence, usage has to be defined in terms of practicality: both the collection and the analysis of data have to be framed within the context of cost and impact.
In a previous discussion about decision making we covered the use of qualitative decision making. The parameters discussed there, i.e. strong pattern recognition as part of qualitative decision making, are particularly applicable to growth and scaling. In practical terms, this translates to combining practical experience, both industry-specific and general business experience, to decide at both the tactical and strategic levels: industry know-how combined with generic business experience provides the sort of “umbrella” coverage that leaves little room for “guessing.”
On the front line
Interestingly enough, there are some unique aspects to data usage when it comes to scale and growth: though the basic methodology of collection and analysis is the same, the direction of decision making should entail a more dynamic version of “bottom to top” or “top to bottom”: micro decisions vs. macro decisions: