Business Capability Model

The group I am currently working in is responsible for functional architecture. In spite of the fact that I don’t have any practical experience with it, I have been asked to help define a practice in Business Capability Modeling.

I think the reason for that is that I have some practical insight into the requirements that functional architecture or functional systems design places on a business capability model.

The core principle of functional architecture involves the semantics of units of work. Business capability modeling is also about defining the semantics of units of work – so there is my connection point.

Business capability modeling, to me, appears to be about defining the semantics of the units of work that are accomplished within the organization, and expressing how those units are connected to each other in chains and gangs to form the basis for generating business value. These chains and gangs of capabilities form products and product lines, so that our business strategy can be expressed as actions or impacts on our capability model.

Strategic Benefit

The value of the capability model from the strategic perspective is that we have an information model to reason over as we propose and adjust strategy, to understand how the organization will or should react to execute that strategy. Capability informs capacity: growth is constrained by a lack of capacity to perform some units of work within a business capability.

Functional Benefit

The value of the capability model from the functional architecture perspective is that we have an information model against which we can align our technology investment, so that we can assess the impact of proposed investment and adjust or tune that investment to achieve the desired impact.

Organizational Benefit

The value of the capability model from the organization design perspective is that we can align resource and leadership models to maximize throughput or minimize risk. Staffing and organization design become less arbitrary, less emotional. If I am following the requisite organization model, then I can measure the time span required for units of work and ensure that we hire people with adequate capacity to be effective.


Learn To Code – Now

I recently spent some time working my way through “Learn Python The Hard Way” by Zed A. Shaw. Zed is a programmer who has accomplished more than most in his short time on Earth. He is outspoken and often edgy, and has a reputation for being both brilliant and blunt. Zed is the creator of the Mongrel server engine that powers many Ruby on Rails sites.

Zed comes off as a hard-ass, more than anything, and his proposed methodology for learning programming is hard as in hard-assed, not hard as in difficult. Learn Python The Hard Way is old school. Which is good, because I am old. It reminds me of learning Fortran in my freshman year of college in 1980. Hollerith cards. 029 keypunch machines. All batch processing. When you are dealing with “physical” cards, physical sorting of program steps, and waiting an hour to see if your code compiled, let alone executed to completion or got a correct answer, you tend to do a lot more “desk checking” than we do today. That is what I like about LPTHW: it teaches some technique around old-school desk checking, like reading your code backwards to find errors – something we often did on green-bar paper at a table at Helmut’s Alpine Kitchen at two o’clock in the morning, with a pot of coffee and an order of biscuits and gravy.

Zed recommends that you use a text editor and not an IDE (in his mind, the difference is that an IDE lets you compile and run your code without leaving it) – so that you learn to use the command line to run your code. He starts by having you copy his code verbatim and get it to run, so that you understand how picky the interpreter is about syntax. LPTHW does an excellent job of layering topics, building up from basic concepts to some rather advanced concepts without overwhelming the learner.
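
To make that concrete, here is a minimal sketch of the kind of program the early exercises have you type and run (the file name and the print statements here are my illustration, not Zed’s actual lesson):

    # ex1.py - typed verbatim into a text editor, not an IDE
    print("Hello World!")
    print("I can run this from the command line.")

Save the file, then run it from the command line with python ex1.py. Getting even this to run teaches you what the interpreter does with a misplaced quote or parenthesis.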

Each lesson has some extra research or exploration or experimentation assigned to go beyond the basic exercise that he walks through. I recently worked with a new programmer who was using this to jump start a possible career change. He made it through in about 8 weeks and didn’t really get confused until he got to the object oriented concepts in lesson 43 or so.

Like any other learning endeavor, I would highly recommend that you seek a mentor or a coach to help you when you get stuck. Sometimes all you need is a sounding board, so that in describing your problem you can see the solutions; other times you may need a hint about other ways to think about the problem that might yield a solution. Having someone experienced or knowledgeable to talk with is always a good idea when learning something new.

The other thing is to adopt the beginner’s mind – don’t assume that you know or understand. Don’t get stuck because of what you have always thought. I think that LPTHW does a pretty good job of forcing you to adopt this mindset.

I also think that Python is a great first language. So if you want to learn code, and you want to dabble in programming, especially if you think that might be a career option for you, pay the money for this book. Find a mentor or coach, and work through the lessons. I pretty much guarantee that you will learn something about programming – even if it is that you don’t like it as much as you thought.


Learn To Code – Languages

It’s 2014, almost 2015, and conventional wisdom about computers and programming has changed dramatically in the 30 years since I graduated from college. The share of people who use computers has grown from 10% to 90% in that period. My Google Nexus phone has way more memory and compute power than the mainframe I learned programming on in college. The PC that I bought in 1986 had a 20 megabyte hard drive – that would hold about 10 images shot on the camera embedded in my phone, or one shot in raw mode on my DSLR.

From the 1950s into the 1970s, computers were physically large, occupying whole rooms and requiring many attendants or operators to manage. In the 1970s, the microchip, or integrated circuit, allowed computers to be built that would fit on a desk. Now we all need a laptop, a tablet, and a phone – and maybe a watch or a pair of glasses – that are all computers of some kind. We have computers in our cars and in our smart homes. All our video gaming consoles are just computers.

Conventional wisdom, which 30 years ago held that computer programming was a highly specialized skill, now says that everyone should learn how to code, even if they don’t do it very often. This is because as computers become more ubiquitous, we need to understand them – the same way everyone should know basic auto maintenance, like changing the oil or mounting the spare tire when you get a flat, and the same way we know how to unclog a toilet or sink drain or oil the squeaky hinges in our home. Computers are so much a part of everyday life that we need to understand more about how they work.

So let’s just accept the conventional wisdom for a moment. What does learning to code mean? What is code, exactly, and how does one learn it?

What is code?

Code is a means of expressing instructions that you want the computer to “do”, or execute – like asking your butler to answer the door. When people talk about code today, they usually mean writing those instructions in a “programming language”. Writing a program is the equivalent of teaching your dog to roll over on command. Once she has learned the instruction and response, you say “roll over” and she obeys. So semantically, programming is more like training the computer to do a task – except that once you have written the program, you can give it to any “compatible” computer and that computer will be “trained” as well. In this analogy, though, we would also have to learn to speak “dog”. Computers can only be trained in a language that they speak.
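
To make the training analogy concrete, here is a minimal Python sketch (my own illustration): the “trick” is defined once, and afterwards the computer performs it on command.

    # "Train" the computer once: define what "roll over" means.
    def roll_over():
        print("*rolls over*")

    # Give the command; the computer obeys every time.
    roll_over()
    roll_over()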

What language do computers speak?

Natively, computers speak the instruction set that they were designed to execute. Originally, each type of computer spoke a unique language aligned with the design of its processor. This was problematic for programmers, so programmers decided to create new languages and train computers to understand them.

How do you train a computer to understand a language? You give it a program. (* Mind blown. *) A programming language needs a computer program that translates from that language into the native language of the computer you want to train. These programs come in three flavors: compilers, interpreters, and runtime intermediaries. A compiler is a program that quite literally takes in a set of instructions in one language (source code) and outputs a set of instructions in executable machine language. An interpreter is a program that does this one instruction at a time, so it runs at the same time your program runs. One advantage of an interpreter is that there is usually a “CLI”, or command line interpreter mode, where a programmer can type individual instructions and the shell displays the result of each one. The disadvantage is that translation happens every time an instruction executes, rather than once before execution, so interpreters may have performance problems. A runtime intermediary is like a big program of pre-translated, usually optimized instructions designed to simplify your compiler or interpreter: instead of outputting machine code, the compiler outputs intermediate code that has been pre-translated into machine language inside the runtime intermediary. Examples of this are the Java Virtual Machine and the Microsoft CLR (the .NET runtime). This approach can also solve performance problems in interpreted languages.
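
As it happens, Python itself illustrates the intermediary idea: the standard CPython interpreter first compiles your source into bytecode, then executes that bytecode on a virtual machine. The standard library’s dis module lets you peek at that intermediate form – a minimal sketch (the exact output varies by Python version):

    import dis

    def add(a, b):
        return a + b

    # Show the intermediate bytecode that the CPython virtual machine executes.
    dis.dis(add)

Running this prints the handful of stack-machine instructions (LOAD_FAST, BINARY_ADD, RETURN_VALUE, or their newer equivalents) that sit between your source code and the processor.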

Programmers have created hundreds of languages in which to write code. Ever since the first language was invented, programmers using it have become frustrated and said, “This sucks, I can do better than that.” Virtually every language came out of some programmer who uttered that phrase – and who, of course, was capable of creating a new language.

Today, popular languages like Java, PHP, Python, Ruby, and JavaScript have been around for more than 10 years and have become widely used in new development. Older languages like C, Basic, COBOL, Perl, Smalltalk, or Lisp are still widely used and not to be discounted. Newer languages like Scala, Clojure, Erlang, Haskell, Go, and Groovy have yet to prove that they will last. Just read the timeline of computer languages to see how many have come and stayed, or gone. Just like human languages, programming languages are derived from each other, inherit words and concepts from each other, and can be divided into families. And just as a human language dies when its last native speaker dies, when nobody is writing new programs in a language (just fixing the old ones), the language is essentially dead.

What does it mean to learn a programming language?

A computer program, essentially, is a set of steps that follow a sequence. Those steps can do one of four things (a short sketch after the list illustrates all four):

1) Interact with hardware (e.g. display something, write a file, get input from mouse, touchscreen or keyboard, etc.)
2) Manipulate data (e.g. calculate a value, manipulate some text, change an image, etc.)
3) Make choices that affect the result (e.g. if the mouse clicks on a button, or if the data contains the word “stop”, etc.)
4) Define an alternate sequence of steps (e.g. repeat the same steps over, follow a different set of steps, or skip one or more steps, etc.)
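
Here is that sketch – a few lines of Python (my own illustration, not from any particular tutorial) with each kind of step labeled:

    # 1) Interact with hardware: read input from the keyboard.
    text = input("Type a word (or 'stop' to quit): ")

    # 4) Define an alternate sequence: repeat these steps.
    while True:
        # 3) Make a choice that affects the result.
        if text == "stop":
            break
        # 2) Manipulate data: transform the text.
        print(text.upper() + "!")
        # 1) Interact with hardware again.
        text = input("Type a word (or 'stop' to quit): ")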

Languages provide words – nouns and verbs – which allow the programmer to create statements, expressions, or sentences that effectively become these steps. Languages provide constructs for organizing statements in different ways; these tend to be called “paradigms”, because they change the way the programmer thinks about instructing the computer. This is not that dissimilar from the way human languages affect the thinking of native speakers. Languages also provide ways of organizing programs as parts of other programs, which can be thought of as analogous to the notion of sentences, paragraphs, chapters, and books in human languages.

Like human languages, programming languages have conventions and representations for “context”, so you know how to interpret things like pronouns and which noun or verb an adjective or adverb is modifying. In the English sentence “One boy pushed another boy so that he fell down,” it is unclear which boy fell, the pusher or the one being pushed. We normally assume things from the context, but semantically either interpretation could be correct. The compiler of a programming language should “catch” such ambiguity and flag an ambiguous instruction as an error – meaning that the computer cannot reliably or precisely interpret the instruction. Likewise, compilers will flag syntactically incomplete instructions or logically inconsistent statements as errors. Humans, when encountering such sentences, make a judgment and naturally apply a correction to make the sentence precise, consistent, or complete. Compilers do no such thing; they act more like your high school English teacher, effectively making red marks on your program to indicate your errors. Programming language compilers are pedantic about small infractions in ways that can feel incredibly frustrating to human language speakers.
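
Python’s compiler behaves exactly this way: hand it a syntactically incomplete instruction and it refuses with an error rather than guessing at your intent. A quick sketch using the built-in compile() function:

    # An incomplete instruction: an 'if' with no colon or body.
    source = "if x"

    try:
        compile(source, "<example>", "exec")
    except SyntaxError as err:
        # The compiler's "red mark": it will not guess what we meant.
        print("Rejected:", err)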

Both human languages and programming languages are just shared abstractions by which we can communicate some underlying idea that we have. The same idea can be communicated in many languages, and would sound different, and look different on a page, but would mean the same thing.

Learning a programming language is similar to learning a human language in that it requires one to learn the vocabulary, syntax (sentence structure), and semantics of the language. But to really, deeply learn to write amazing and powerful programs, you have to understand how the language turns your statements into the native language of the computer – not to be able to translate into “machine code”, per se, but to understand the underlying machine architecture and how it processes instructions.

Learning to write code can be a fun exercise. It teaches problem solving skills. It increases one’s ability to think in the abstract. But it can be frustrating for the beginner because computers are literal, not contextual. They only interpret the instructions typed in, not the body language or the facial expression, or the intentions of the programmer.

What language should a new programmer learn first?

There are a lot of articles on this, but I like Python for beginners. I like it because it runs on virtually any computer, PC or Mac, that you might own. I like it because it has been around for a while, is pretty widely used, and is “general purpose” – meaning it can be made to do most anything you want it to do. I like it because Python is interpreted, and sometimes that makes it easier to get started.

Python’s syntax is simple, not cluttered with unnecessary punctuation. It is pretty easy to learn, and there are some great tutorials out there designed for the beginner. My current favorite is “Learn Python The Hard Way” by Zed A. Shaw. I won’t tell you why; just do that one first. It will take you farther, faster than any other tutorial out there, and much of what you learn will apply to other languages as well.
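
For a taste of that uncluttered syntax, here is a trivial sketch – no braces, no semicolons, no type declarations:

    # Indentation does the work that braces and semicolons do in other languages.
    for n in range(1, 4):
        print(n, "squared is", n * n)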

I suppose that one could make a case for any language to be learned as a first language. If you have some project in mind, you should start by learning a language that will make it easy to do your project. But if you are just learning for the “fun” of it – then take my advice and start with Python.

So go out and get started. Learn to write code. If it makes sense to you, go as far as you can.


Jedi Talking – Five Questions Reveal Approaches to Influence

Sometimes in life and work, we become convinced of a need to change before most of those around us. Either we read the tea leaves, or we see the bigger picture, or somehow we were just able to jump through the problem straight to a potential solution. Maybe we have worked through all the analysis in our mind and have a detailed idea that could be a slam dunk, a quick win, or a major turn-around for the organization. The problem is simply that everyone else is stuck in the status quo. Maybe they don’t see the problem clearly yet, maybe they are just not willing to give what change requires – or maybe they see the obstacles to change as unavoidable or, worse, unforeseeable. Maybe they see the risk of the change as many times larger than the risk embedded in the problem.

You have tried telling them. You have tried to convince others that your idea is good, that it will work. You have “told them until you are blue in the face.” Somehow, you end up coming off as unhelpful. People generally get defensive when you try to tell them about the problem; you can’t even get the solution on the table.

Perhaps the issue is not that your analysis is weak, or your solution is not worthy, but only that it is not shared. How do you get others to share your perspective, and to help champion your ideas? How do you get them to understand that the status quo (which they have been working hard to build and keep going) is going to turn out to be insufficient to achieve the larger vision? How do you get them to “disinvest” themselves in the way things are, so that they can invest in a new idea? How do you get them to be open to your ideas, instead of getting defensive?


Functional Architecture Principles

Functional Architecture as a discipline has been brewing for a few years now. I have been a “functional architect” for a software application, and have also been involved in functional architecture review of enterprise software programs. I won’t claim to know what functional architecture means in any universal sense, but having done this work, and been in this role, I can describe some functional architecture principles that I know are helpful to making software more valuable in an enterprise context.

Functional architecture has several aspects.


Five Lessons Learned From Consulting Engagements

In a recent post about consulting engagements, I talked about some of the challenges with consulting organizations and their standard practices. I thought some readers might benefit from insight into handling these kinds of challenges, so here are some specific suggestions.

1) Consulting firms have “relationship” managers or “engagement” managers – people whose job on the project is to ensure the customer is satisfied. It is their corporate mission to ensure that your company spends more money with them. They are sales people. They come to your project meetings with a stated purpose of making sure that the project is smooth and successful. Their “other” purpose is to develop a deeper network in your organization and to “discover” other opportunities for their firm to “help”. While they may have expertise, industry knowledge, and skills that help your organization, it is worthwhile to question whether they should be billable on your engagement.


Consulting Engagements

My current role is interesting. I am an internal IT consultant in a large financial corporation. As an internal consultant I am free to work on as many projects as I can juggle. My billing is only explicit when I work on capital projects. I spend more time talking than “working”. Most of my working is writing. Yes, making PowerPoint decks is considered writing.

Over the past year, most of my own internal consulting engagements have involved some coaching. Coaching leaders on the business side of our organization through projects with IT entanglement. Coaching IT leaders through adoption of new technology or practice patterns. Coaching project leads into positions of transparency and truth telling. Coaching different kinds of leaders through developing guiding principles that make all the little decisions easier. Interestingly enough, the coaching is not really what I was engaged to do. It simply flowed from my understanding of the needs of individuals in the project context to be successful.

Recently, I have been working with a number of external consultants. Teams, actually. Teams of consultants from big 5 firms. I have been attached to the same project as they have, and to them, I am a SME and a network adapter. I share my knowledge of organizational practice and my interpersonal network with them, so that they can get their deliverables accomplished.

What I often struggle with is the shallowness of their analysis. Their engagements are short, usually in 6-week increments. They spend a lot of time collecting data but not really producing information. They have methodologies that I suppose would be effective if the data and information they were fed were appropriately scrubbed and semantically understood.


A taxonomy of software types

Generally, software falls into three classes: Apps, Tools, and Infrastructure.

The Breakdown

Apps – or applications, as they were formerly known – are software built to help a user do some valuable activity, like checking a bank balance or editing digital images. While the end user must learn how to use it, an app is useful without further development. Some apps are configurable, so they can work differently for different users or organizations, but they are still focused on solving problems or delivering value within some specific functional domain.

Tools – are software that is more general purpose, but with a specific flavor: it is designed so that users can “fashion” applications for themselves or for other groups of end users. Tools express their own user experience but are not always immediately valuable without some “fashioning”. Tools can range from Microsoft Excel to SharePoint, to web content management systems like WordPress, to giant ERP systems like SAP or PeopleSoft. Many business intelligence products fall into this category.

Infrastructure – is software that really has no end user experience. It is designed completely as a foundation for other software to be built upon. This includes any software whose primary interaction mode is an API (application programming interface) or a CLI (command line interface). Databases, middleware, application servers, and application frameworks all fall into this category.

So why is this so confusing to people? Because technology has its own functional domains, and these classes are not mutually exclusive: for one user a product is an application, while for another it is a tool. With add-ons, extensions, or plug-ins it becomes even more confusing, as these constructs blur the lines between tools and applications even further. Plug-ins for a tool may themselves be applications focused on one functional domain.


Feats Instead of Processes

In my last post on Software Capabilities and Feats I said that feats are better [to model in a software capability] than processes, because processes are merely organized, consistent, managed ways to accomplish the feat.

A process is one way to accomplish a feat. The feat is the result you want.

Process is constrained by capabilities. So when I am modeling new capabilities, I should not be constrained by existing capabilities. When I introduce the new capability into the wild, the process will need to “re-form” around the new capability.

A process has steps. If we build software capabilities to support a process, we treat the steps of the process like “feats”. I can model a capability to make that step faster or more effective. That assumes that the step is necessary. In order to decide whether a process step is “necessary” I need to understand how it contributes to the valuable result. I need to understand why I do the step.


Software Capabilities and Feats

In some recent conversations, I find that it is hard to explain the notion of a capability. People want to talk in terms of software features or project requirements.

Software capabilities define value in the following ways:

  • enabling the user to accomplish a “feat” in less time than they otherwise would.
  • enabling the user to accomplish a “feat” at a greater scale than they otherwise would.
  • enabling the user (or a group of users) to accomplish a “feat” more effectively than they otherwise would.
  • enabling the user to accomplish a “feat” that he could otherwise not accomplish at all.
  • enabling the user to accomplish a “feat” better than he otherwise could.
  • enabling the user to focus on “feats” that require decisions, rather than repetitive steps.

Every other benefit of software can be composed from these.

To define value, a software capability must contemplate (model) the “feat” that the user wants to accomplish.
