What is Data Architecture?
Data or information architecture is the second phase of activities in most Enterprise Architecture frameworks and the basis for many of the technical activities that happen in the EA lifecycle. This post will walk through the primary set of activities and tasks that must be undertaken to provide a world-class EA plan for a solution, strategy or plan. Data or information architecture captures where your data assets are, what you or your systems are going to do with them, how they will be secured, integrated with, governed, classified and reported on, and what their lifespan will be, as well as several other factors. This set of deterministic activities happens only AFTER the business architecture is clearly defined. Data architecture should be an established practice in a large organization and, at a minimum, a standards-guided set of activities in a smaller organization.
Why should I care?
Robust, intelligent and deliberate data architecture is the key to a stable, secure and value-driven solution in today's enterprise. Think back to all of the systems, solutions and products you have worked on that didn't quite have all of the data that was needed. Remember those systems where you would be midway through a process and would have to stop and import a bunch of .csv files or data from somewhere else? Remember how challenging it was to integrate between systems? Think of the myriad issues surrounding data governance. Who owns which data elements, and who governs who can see them, use them and modify them? Who really owns the security and risk mitigation of the data elements? How can the information and data in your enterprise be collated with other industry data on a large scale? Never mind all of the hype and misnomers in the industry right now around big data and data science; data architecture plans for those disciplines and sets them up to add value with minimal barriers to entry. In the modern enterprise application ecosystem, more and more enterprises are dealing with hybrid-cloud environments where all of your data assets are not on premises. How do you control, appropriately secure and report on them?
What are the main components of a data/information architecture plan?
Data Governance - Data/information governance is the practice of understanding, classifying and enforcing who can see which data elements, logging their usage, and understanding who is the steward of the data. When a proper data governance practice is in place, business stakeholders feel "in control" of their data assets, empowered to use them for competitive advantage, and confident in their value and meaning. It is critical that all data assets have clear ownership and a clear process for granting other systems, people, processes and reports access to these portions of information. Lack of data governance typically results in data integrity, validity, quality and other problems. Even a simple process is better than no process.
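The ideas above can be sketched in a few lines of code: every element gets a named steward and an explicit allow-list, and every access attempt is logged. This is a minimal illustration, not a product; the element name, steward and roles are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class GovernedElement:
    name: str
    steward: str                      # who owns and governs this element
    allowed_roles: set = field(default_factory=set)

class GovernanceRegistry:
    def __init__(self):
        self.elements = {}
        self.access_log = []          # a simple audit trail of every check

    def register(self, element: GovernedElement):
        self.elements[element.name] = element

    def can_access(self, role: str, element_name: str) -> bool:
        allowed = role in self.elements[element_name].allowed_roles
        self.access_log.append((role, element_name, allowed))
        return allowed

registry = GovernanceRegistry()
registry.register(GovernedElement("CUSTOMER_EMAIL", steward="CRM Team",
                                  allowed_roles={"marketing", "support"}))

print(registry.can_access("marketing", "CUSTOMER_EMAIL"))  # True
print(registry.can_access("finance", "CUSTOMER_EMAIL"))    # False
```

Even a toy registry like this makes ownership and access explicit; the point is the process, not the tooling.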
Integration - A plan describing which data elements can be integrated with and by what means they will be consumed and published. A robust integration strategy defines how and where integration will be made available. In a "cloud to ground" and "ground to cloud" data integration scenario, the old integration tools, methodologies and established practices may not apply. This is perhaps one of the most critical elements of a data architecture. Publicly available APIs that allow "the public" to integrate with your digital assets are becoming more and more important for modern companies, enabling 3rd-party vendors, suppliers and eventually other businesses and aggregators to leverage your assets and products. Integration is typically where the majority of pain occurs over the lifespan of a system or data asset. Make sure you have a clear, concise plan and architecture that securely enables easy integration.
Data Models - The artifacts that describe where and how your data is related, sourced and stored across schemas, integrations and endpoints. Some detailed data models describe the canonical relationships between data elements. For example, which data elements comprise an address? Of course the easy response is street, city, state, zip code, etc. Well, what happens when you have a customer or supplier address in a country that uses only PO boxes and does not have a zip or local code? Your data model must accommodate this, or at least not preclude you from easily accommodating this scenario. A data model must also have a process for keeping it up to date. One of the worst practices seen in data architecture is not keeping these artifacts versioned and updated. Establish some scheme for regularly reviewing and refreshing these artifacts as part of your change and release processes.
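The address example can be made concrete with a small sketch: a model where the postal code and region are optional, so a PO-box-only country does not break validation. The field names are illustrative, not a canonical address standard.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class Address:
    lines: Tuple[str, ...]             # street lines or a PO box, free-form
    locality: str                      # city or town
    region: Optional[str] = None       # state/province: not universal
    postal_code: Optional[str] = None  # zip/postcode: not universal
    country: str = "US"

# A typical US address populates everything:
us = Address(lines=("123 Main St",), locality="Springfield",
             region="IL", postal_code="62701")

# A country with PO boxes and no postal codes still models cleanly:
po_only = Address(lines=("PO Box 42",), locality="Some Town", country="AW")
print(po_only.postal_code)  # None — the field is absent, not invalid
```

The design choice is simply to treat "not universal" fields as optional rather than required, which is the difference between accommodating the scenario and precluding it.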
Data Quality – What is your plan and methodology for ensuring that your data is accurate and validated, and maintained that way in perpetuity? What are your points of data validation? Is your data validated as it is entered, whether by an end user, employee or customer? Are there regular checks on where and how you validate your data? What are your best practices for data quality, and do you need any tools to ensure it? How mature is your practice? Are you scoring the quality and accuracy of your data? Ensuring that your data/information is accurate, in context and free of anomalous data problems is critical to the integrity of reporting, analytics and essentially every transaction. In my experience, I have seen executives lose trust in reporting because of 'glitches' or other problems, to the point where they missed huge opportunities because of their lack of confidence in the reporting. Data quality is a must.
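"Scoring the quality of your data" can be as simple as running each record through a set of checks and reporting the fraction that pass. A minimal sketch, with illustrative field names and checks:

```python
def quality_score(records, checks):
    """Fraction of records passing every check (1.0 for an empty set)."""
    if not records:
        return 1.0
    passed = sum(all(check(r) for check in checks) for r in records)
    return passed / len(records)

customers = [
    {"email": "a@example.com", "age": 34},
    {"email": "not-an-email", "age": 34},   # fails the format check
    {"email": "b@example.com", "age": -5},  # anomalous value
]
checks = [
    lambda r: "@" in r.get("email", ""),     # crude format check
    lambda r: 0 <= r.get("age", -1) <= 130,  # range/sanity check
]
print(quality_score(customers, checks))  # one of three records is clean
```

Tracking a score like this over time is what turns "we validate our data" from a claim into a measurable practice.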
Data Retention – How long does your data stay with your organization? Some kinds of data are subject to data lifespan requirements and must be deleted after a certain period of time, or have portions removed or permanently obfuscated. Data elements and data sets that need to be redacted, deleted or scrubbed after certain periods of time must be called out and defined, and a plan to ensure the needed elements are dealt with must be presented.
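A retention plan can be expressed as a simple policy table plus an enforcement pass: each data class maps to a maximum age, and expired records are either dropped or redacted. The data classes and periods below are hypothetical examples, not recommendations.

```python
from datetime import datetime, timedelta

RETENTION = {
    "web_logs": timedelta(days=90),       # delete entirely after 90 days
    "customer_pii": timedelta(days=365),  # redact after a year, keep the row
}

def apply_retention(records, now):
    kept = []
    for rec in records:
        limit = RETENTION[rec["data_class"]]
        if now - rec["created"] <= limit:
            kept.append(rec)                          # still within lifespan
        elif rec["data_class"] == "customer_pii":
            kept.append({**rec, "name": "[REDACTED]"})  # permanently obfuscate
        # expired web_logs are simply dropped
    return kept

now = datetime(2020, 6, 1)
records = [
    {"data_class": "web_logs", "created": datetime(2020, 1, 1)},
    {"data_class": "customer_pii", "created": datetime(2019, 1, 1), "name": "Ann"},
]
result = apply_retention(records, now)
print(len(result), result[0]["name"])  # 1 [REDACTED]
```

The value of writing the policy down this way is that "called out and defined" becomes testable rather than aspirational.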
Data Security & Compliance - This may be the most important and scrutinized section of your data architecture. How will your data assets, integration assets and reporting assets be secured at rest, in transit and in memory? Much of data architecture security depends on platforms and solutions that involve other phases of enterprise architecture [business, solution and technical]. Your data architecture should include your plan and strategy for encryption, key management and other mechanisms that enable securing your data and integration assets. This component of data architecture deserves its own detailed section.
Data Warehousing - This is the plan for correlating and storing data from various systems in one central warehouse or cube for reporting and analytics purposes. This strategy will describe the tools, processes, patterns and methodologies used to make sure transactional data and information make their way into an analytical data store, where they can be analyzed and refined for seamless reporting.
Reporting - Your data architecture must describe which tools, methodologies and processes will be used to report on the business activities, outcomes and key performance indicators that are relevant to helping the business.
Metadata - In the simplest of terms, metadata is "data about data". In other words, it is the set of descriptive elements about your data, or "more context" about it. An example might be an image as your data; the metadata would be the subject of the image, the date it was taken, perhaps where it was taken, and who holds the copyright on the image. Your data architecture should describe how you plan to capture, store and correlate metadata.
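The image example above, as a sketch: the data is the image bytes, the metadata is a separate, queryable description of it, and a stable identifier (a content hash works) correlates the two. The keys and values are illustrative.

```python
import hashlib

image_bytes = b"\x89PNG..."  # stand-in for the actual image data

metadata = {
    "subject": "Mount Rainier at sunrise",
    "taken_on": "2015-07-04",
    "location": "Paradise, WA",
    "copyright": "Jane Photographer",
}

# Correlate metadata to data by indexing on a content-derived identifier:
image_id = hashlib.sha256(image_bytes).hexdigest()
catalog = {image_id: metadata}
print(catalog[image_id]["subject"])  # Mount Rainier at sunrise
```

Keeping the metadata out of the binary and in a searchable catalog is what makes "more context" usable across systems.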
Data Dictionaries - Data dictionaries are closely related to metadata, data warehousing and reporting. Data dictionaries hold the metadata and semantic information about the different data elements, reporting and analytic capabilities and the methods used to manipulate the data and information. Your data architecture plan must account for a data dictionary capability.
Data Platforms - This portion of your data architecture plan should describe the requirements of your data platforms, not the actual platforms, technologies, tools or vendors. It should also include an inventory of existing data platforms and their capabilities, as well as any business drivers that may necessitate additional or different platform capabilities.
by Erik Peterson
A friend and I were prepping to advise the financial execs of a large, international company on cloud adoption. He turned to me and said “Erik, I think I’ll tell them cloud adoption is a CFO decision”. I replied with something like, “Absolutely. Of course such important investments like ‘the cloud’ are a CFO decision”.
Well, the statement makes a good point, although it is a bit idealistic. A more complete statement would be something like “a best practice is that cloud adoption strategy and execution should include CFO involvement, approval, controls, and transparency…” Not as pithy, but let’s talk about the relationship between cloud adoption and the office of the CFO, and my deliberate use of the word “should”.
Back to idealism for a minute…
Consider a perfect world where a company has clear business strategy and objectives. The organization has mature operational governance and financial systems that reasonably align technology investment and outcomes with the business. The current state of the information technology footprint is reasonably up to date and uses best practices. It’s not bleeding edge but the “house is in order”. The company is doing well and now along comes a massive information technology shift from a cottage industry approach to a utility approach. No worries, this company is positioned to take advantage.
But companies aren’t perfect and decision makers have to navigate new waters knowing the rudder is a bit cocked, the crew has some issues, there are a few leaks being fixed in the hull, and “if we just had more wind in the sails”. So how does an imperfect executive team comprehend when, where, and how it should move to the cloud?
A utility metaphor…
Think about a century or two ago when a home owner had to implement their own method of obtaining water, disposing of sewage, and providing lighting. Today, unless you’re building a remote cabin in the woods, you’re hooking up to utilities.
What business wants to be a remote cabin in the woods? An IT trend has been taking place where businesses not hooked to the utilities (the cloud) look more and more like that isolated cabin. They just can’t get connected fast enough. They can’t connect to their own systems, let alone partners, providers, customers, etc. Ugh! Executive ideas, ambitions, and objectives are held hostage by IT.
But not so fast…
Sometimes the wise move is to not be the early cloud-this-or-that adopter. Imagine hooking up to an immature cloud provider that gets hacked, or that provides only a very expensive exit option when it isn’t providing the expected value. (Note that you should consider calculating exit costs as part of the total EV...who never leaves a tech platform?)
So what’s the relationship of CFOs to cloud adoption? Get educated. Get involved. Don’t strangle its adoption. Don’t let it run wild either.
Note that SaaS cloud adoption is your biggest shadow IT nightmare…right now…and you don’t have it adequately managed. Prove me wrong! As CFO, you need to be involved in managing it. It’s too easy for a well-intentioned but naive staff with a credit card and favorite credentials to start getting work done with a cloud SaaS subscription. But before long you have hundreds of duplicate providers, massive security exposure, lost alignment, and a real mess for you (or the next CFO) to clean up.
How does cloud shift your CAPEX/OPEX for existing and new offerings? How does “on-demand” impact your sales? Your production and delivery? Good CFO questions and I have plenty more you may want to consider.
What it comes down to is that a CFO “should” provide leadership from their view by knowing the tools, opportunities, the risks, mitigation, governance, trends, etc. that help the company make sound cloud investments and manage them well. It’s a great opportunity to contribute to your executive team.
What is the “systems flexibility curve”? As an enterprise architect, I am frequently asked to help business leaders decide which solutions and approaches to use to fulfill a business need. These decisions are often the inflection point [investment wise] of millions of dollars over many years and in some cases can be highly critical to the business. The “systems flexibility curve” is a visual representation of the trade-offs uncovered when determining the best way to fulfill a business need with a solution. The chart represents the several competing interests that come into play when using technology to fill a business need. This graph was created out of a need to help mostly non-technical people [business leaders] understand some of the factors that go into a build vs. buy or new-solution discussion. I will try to define them for the reader here:
At the far left of the graph we see a system that is custom-built to the exact specification of a business process, where the customer [business partner or client] has complete flexibility. A solution in this space requires significant long-term investment in maintaining the custom-built solution, and thus carries high labor costs and associated risks. The final attribute of a solution at the far left is the slow speed of delivering it.
A solution at the far right of the graph is one that was purchased and deployed out-of-the-box with little configuration. Thus we see that automation is high with regard to the business process. The high systems cost represents the purchase of the software or solution and the high one-time, upfront costs. The speed of implementation is very high.
The truth is that most solutions that get deployed into production land somewhere in the middle of this scale; it is pretty unlikely that a solution would sit at one of the extreme ends of the spectrum.
In conclusion, this graph or visual representation of the systems flexibility spectrum has been very helpful to me over the years, and I encourage others to use it as well. The graph makes assumptions, and those rules are mostly true, though there are some outliers that break them. There are also ways to get the worst of both ends of the spectrum. The most common is the typical ERP trap, where an organization purchases a software package and customizes it to the point where it is no longer maintainable by the vendor. This is literally the worst of both worlds: you pay for the customization and the high maintenance costs. It is broadly considered to be an anti-pattern in the industry. Enjoy!
by Erik Peterson
IoT is all the buzz, and for good reason, as man and machine continue to get along better. Our world is changing.
Ardec has been and continues to be a player in IoT since 2001, with patented tech that I and several of my brilliant colleagues were co-inventors on. Yeah, 15 years ago. Anyway, let's simplify what you need for IoT, and I'll use some examples from that patent.
Some background...the business idea was that the Ardec client wanted to sell age-verified items from a vending machine. Simple, right? Well, we rocked the vendor conference in Chicago when we unveiled the system. It landed us in national news that syndicated across the US. I hope your IoT rocks too!
So how did Ardec do it?
IoT has three basic components: 1) smart electronics, 2) device packaging, and 3) connectivity.
1) Smart Electronics
Below is the diagram of the smart electronics complete with special sensors, logic, communications, etc.
2) Device Packaging
Smart electronics need form, a way to be installed, provisioned, updated, etc. Most of all, it's usually the packaging that provides the connectivity from the electronics to the physical world. Below is the packaging ("device") in our system.
3) Get Connected
In Ardec's case we could have built the device without any connectivity. But I had this craving to just remote control the thing rather than send a technician on site every time the thing needed to be kicked.
We constructed multiple ways the system could connect over wired and wireless protocols to a software communications system with web APIs. The comm server then connected to a business server where most of the logic was held. A web UI then gave the business a way to control or delegate control of tens of thousands of devices.
Ha ha, you know as I look at these drawings, I am sooooo glad for better drawing tools like LucidChart!
Your device needs smart electronics, packaging, and connectivity. Stop thinking about it and build it. Change the world! Ardec has a way of simplifying otherwise complicated systems. If you're into IoT, let's chat about what you're doing. It really is exciting to see, not the beginning of IoT, but its explosion.
by Alan Rencher
I am often asked which Enterprise Architecture framework I like and which one I recommend. This is usually a very quick discussion where I try and explain that there are great reasons to use a few different ones but I have my own biases and opinions. I try and encourage the serious answer seeker to decide for him or herself and will recite a platitude about there being a few good ones and you should study them and decide for yourself. In the end the person typically just says something like: “Just tell me which one I should use.” This kind of bothers me, but if I am nailed down I usually answer: TOGAF. I want to put some color in the conversation and in this entry I will try and describe the top five frameworks, my take on their strengths and weaknesses and why I like the ones I like.
A very brief discussion on the practice of Enterprise Architecture may be in order. In the late 1980s a guy working for IBM named John Zachman coined the term “Enterprise Architecture”; he did this in a seminal paper entitled “A framework for information systems architecture“. In this paper he outlined potential steps and practices for making good architecture decisions with regard to many different and varying factors. Since then many other smart people have implemented similar frameworks and processes.
In alphabetical order, here are the top 5 that I have experience with and think are sound. I will attempt to provide some strengths and weaknesses for each as well as some brief experiences I have had with them.
DODAF – This is the Department of Defense’s [DOD] framework, created initially in the 1990s as a “more secure” version of some loose standards that the different branches of the armed forces had at that time. This framework is rather large and complex. It is suited for extremely large systems that have very challenging integration needs and extreme performance and security needs. This framework is too big for one person, or even a team, to become expert on. I was required to use its nomenclature and diagrams when I was helping run a defense contracting technology company that was building and integrating supply chain systems for the USAF and later the DOD as a whole. It was actually very different from Zachman and was very challenging to understand. One of the strengths of the DODAF framework was its focus on what it calls “Operational Views” and modeling and representing them from different perspectives. I would not recommend this framework to anyone or any organization unless you are working for a DOD-related entity or a defense contractor.
GARTNER – Gartner offers an Enterprise Architecture process and practice with some elements of a framework. For our discussion today we will refer to their offering loosely as a practice. This practice focuses on what Gartner refers to as “business outcomes”. The Gartner practice is heavily weighted toward defining a “future state” and having everything work toward that outcome. In principle I like this, but their practice seems rather lean in giving you tools and emphasis on “how” you get from your present state to your future state. I had a lively discussion with Betsy Burton and Brian Burke at a Gartner symposium last year on this topic. The Gartner practice is very business friendly and is seeing some real momentum in the industry right now, but is still not widely adopted. I am not entirely sure why, but I suspect that in the coming 18 months this practice will start to get wider adoption. We’ll see. I really have not used their practice exclusively, so my experience with it is all speculative and academic.
TEAF – This was the “Treasury Enterprise Architecture Framework”. This is a dead framework that I used when building systems for a major financial services company earlier in my career. It was a derivative of the Zachman framework that specialized in tighter security controls and was initially sponsored by the US Department of the Treasury. I used this back when it was a brand-new standard and, in all candor, I was terrified of it. It was heavy and scary. It had a rigid vocabulary that differed slightly from Zachman and really focused on a series of matrix views that enabled “intrinsic perspective”. Apparently this framework has been consumed by something even scarier, the dreaded “Federal Enterprise Architecture” or FEA framework. All jokes aside, I am not super familiar with FEA other than through hallway conversations at EA conferences. Some former colleagues in the federal contracting industry have described it as exceptionally heavy and almost as overwhelming as DODAF. Time will tell. Perhaps this framework is responsible for the Obamacare health exchange disaster? Just kidding….
TOGAF – This is “The Open Group Architecture Framework“. This framework is created and maintained by the Open Group. This open-standards group is comprised primarily of industry thought leaders and seasoned EA professionals, and is influenced at least somewhat by its corporate sponsors. These sponsors are very numerous, but the big mega-vendors comprise the “platinum” sponsors. My own opinion is that these sponsorships are very healthy and represent an extremely wide diversity of corporate interests. TOGAF splits all architecture activities into four domains or categories: Business Architecture, Data Architecture, Solution/Application Architecture and Technical Architecture. These categories are worked on in various phases that happen inside of what TOGAF describes as an Architecture Development Method [ADM]. This is a series of steps and activities which help the architect arrive at a usable architecture that helps the business reach its goals. Some purists and others dislike TOGAF because you have to pay to become a member of the Open Group, and you need to buy the materials and pay for the certification tests. I have purchased the materials and I have taken the certification tests. I am not a TOGAF purist, but I like the tools in the TOGAF toolbox and I use them when they make sense. I have delivered value with the TOGAF toolbox. I personally get a little overwhelmed even thinking about strictly following the TOGAF ADM to the letter. That is my own personal preference. I do have a mild criticism of TOGAF in that it does not provide a super-heavy security emphasis. This can be overcome by discipline and your own focus, or by including security requirements as part of your initial business architecture.
Zachman – This is the Stradivarius of Enterprise Architecture frameworks. I used it extensively early in my career and cut my architecture teeth with it designing home automation and engineering process software. It uses a very extensive matrix of design steps. I liked it because it was easy for me to understand. It used simple descriptors like who, what, when, why, where and how to describe different layers of maturity in your overall design. John Zachman was a visionary man. He was a game-changer. His framework is still somewhat popular and many organizations still use it. Nearly all EA frameworks trace their origins to Zachman. The challenge with Zachman is that it has not adapted to the levels of complexity in the industry, even though it has gone through several revisions. That is my opinion. I have had long discussions with many whom I respect who would disagree. In my mind, the simplicity of the more recent editions has made the Zachman framework kind of like a nice classic car that is fun to get out and drive once in a while, but not something you want to drive to work on the freeway every day.
All of these frameworks add value or have added value. There are a few others that I have not mentioned that some practitioners will undoubtedly nominate for the leader’s spot. At the end of the day, in my present practice, I draw on all of my varied technology and leadership experiences, which have their roots in all of the frameworks I have been exposed to. I draw most heavily from TOGAF, and at this point in my career use TOGAF by far the most. I guess I am a TOGAF guy for the most part. I would love to hear which frameworks have added the most value for others. In closing, I feel that there is actually a pretty big hole in the EA space. I believe there is ample opportunity to provide a much simpler framework that would probably borrow heavily from a few of these more well-known frameworks to create a simple, programmatized framework. I wish I had the free time to go do it myself. Who knows…….
by Alan Rencher
I remember as a young man thinking about all of the amazing possibilities that would be around when I was an adult. I remember the prognostications about flying cars, self-driving cars and of course the "Mr. Fusion" portable nuclear reactor from Back to the Future. At that time in my life I really believed that anything was possible. Looking back on the 1980s and some of the predictions and futuristic thinking taking place at that time, some of our expectations for the future were a little strange. Some were right where they should have been. Remember the kids' cartoon Inspector Gadget? For the younger readers, let me give you a brief synopsis. He was a hapless crimefighter who had robotic arms, gadgets and other tricks at his disposal. He would say things like "Go, Go Gadget hair dryer" and a robotic arm with a hair dryer would come out of his hat and dry off his hands. His car would "transform" from a police van to a submarine and things like that. His niece "Penny" and her dog "Brain" were typically the figures behind the scenes making things happen and solving the actual crime or mystery. Inspector Gadget's nemesis was a shadowy figure called "Dr. Claw", the leader of a sinister criminal syndicate called "MAD". In this cartoon, "Penny" and her dog had all kinds of really cool gadgets they would use to fight crime. Penny and her dog had "video watches" and headsets that allowed them to talk without any wires and, of course, who could forget Penny's "computer book". This book could solve difficult problems, compute complex physics questions and control all manner of other electronic devices. These devices seemed almost whimsical and unrealistic to many at the time of the cartoon in the early 1980s.
All of these devices exist today and really are not that surprising to anyone; in fact, many wonder why they took us so long to bring to market. The level of innovation is accelerating. Companies are bringing technology to market faster, and connectivity and processing power are starting to appear in everyday items that are becoming more and more mundane. The recent IoT or "internet of things" buzzwords are becoming more and more a part of our everyday life. Sensors and data creation points are creeping into more and more aspects of our personal and professional lives. At this point, nearly anything is possible with technology; the science fiction ideas of yesteryear are science and technology reality today. Are you and your business ready? Do you understand how you and your company need to take advantage of these digital assets and technologies to further market to and engage your customers and users? Do you have a plan to do this?
by Erik Peterson
I'll kick off our blog with some discussion about a favorite Ardec motto I coined some ten years ago: "Wise investments, well managed". This has been a strong agenda of mine for my clients. Too many decision makers, including some who started with deep pockets, seem to just burn through money on this and that technology. Well, yes, like me driving a green at 300 yards, it can and has happened, but throwing money at technology in a "swing for the fences and just maybe" mentality just doesn't cut it.
Additionally, how many of us would spend the money to put in a beautiful lawn and then let it go un-watered, un-fertilized, un-cut? Was the grass bad, or the care? That unfortunately happens with technology spend too. I've seen the fallout from management spending big bucks on some great tech, then walking away like they think it will magically take care of itself. How about this scenario: ever seen someone forget that no matter how awesome the tech, it takes talent to operate...and usually other investment too? Let's stop buying Ferraris without steering wheels, parking them on the lawn and using them as flower pots. Ha ha, too fun poking at some of the poor patterns I've seen!
The point is that there are sound principles for technology investment and management. Let's learn them. Let's apply them. I hope this blog and Ardec's services can reduce the economic waste from bad, poorly managed investments.
Buy well, drive well.
by Alan Rencher
First off, what is a customization and what is a configuration? A customization is a feature, extension or modification of a software feature that requires custom coding and/or some form of implementation. A configuration is where you use native tools in the system to change its behavior or features. Wait, you may be asking, those sound very, very similar. They are similar. The key differentiation between a customization and a configuration is this: does the work done to enhance the feature or extend its capability roll with an upgrade? In other words, when the software vendor releases the next version of the system, does the work you did require rework? If you answer yes, you have a customization! Congratulations!
There has been a shift in the industry in the last few years. A far-reaching and important shift in how we fundamentally think about software, specifically purchased software. Software that is purchased tends to be tailored for a very specific purpose. Think of low-end consumer purchased software, things like Microsoft Office products or Apple’s iTunes. These software suites are purchased to fit a very specific need. Would you ever consider customizing them? That is, opening up or de-compiling their binary files and inserting your own logic or your own features? Even if this were something that would be relatively easy to do, nobody in their right mind would try. As soon as Microsoft or Apple released a newer version, what would happen to all of your customizations? That’s correct, they would all break. This is a simplistic example but a valuable one. Let’s examine some similar examples in the automotive space. When you purchase a new car, all manufacturers give you options to configure your new car in many ways. You can have different kinds of wheels, a more powerful engine, automatic versus manual transmission, paint job, leather seats, etc. You would never consider purchasing a new car and then adding a new kind of engine that you built yourself. Or maybe you would? The photographs above show a standard configured vehicle and a “customized” vehicle. Perhaps you may want to customize your vehicle. Once you start those customizations, you invalidate your warranty. Surely you could find technicians and others to help you maintain and enhance your car customizations, but it wouldn’t be cheap and it wouldn’t be very fast either, and if you were to try to find a new technician to help with your customized vehicle, it would take some time for that technician to “get up to speed” on it. The overall TCO [total cost of ownership] for the customized car would be far greater than for the stock or “configured” car.
Another consideration for these vehicles is regulatory compliance. When you purchase a configured vehicle, it comes with the guarantee that it complies with the requirements of all governing bodies: crash safety standards, EPA emissions limits, and others. Can you imagine the cost and time it would take to obtain these regulatory certifications for your own custom car?
Before you email or message me to complain that the automotive market is different from the software market, let me preemptively say that they are close enough. I cannot count the number of times I have seen companies and other organizations purchase software, heavily customize it, and then a year or two later openly question why they allowed that to happen. In the ERP software space this is a rampant problem that has actually put many organizations out of business. Having planned, participated in, and executed several ERP roll-outs and upgrades, I can tell you that customizing software is a bad idea 99.9999% of the time. In other words, it is effectively always a bad idea. Your organization is better off writing the software itself rather than purchasing it and customizing it.
The next question you are asking is pretty logical: when and how do I help my organization NOT customize? Enterprise Architecture typically follows a solution pattern like the one below, which helps someone planning or implementing a solution to ask these questions in this order:
Follow the chart above. If it is too small, open it in a new tab. Notice that the process starts with the "need for a solution" and quickly turns into a requirements and business analysis activity. Once this is complete, the very first thing that should happen is a discussion around whether a purchased solution can be found. It is unlikely that an exact 100% requirements match will exist; I have seen this happen once or twice in the hundreds of solutions I have architected and worked on, but it is quite rare. In most cases, solutions you purchase and configure will cover 75%-80% of your requirements. When you have that kind of requirements coverage, you have found a good product. If there is no good solution you can purchase, you still have several options. This is typically what the industry refers to as "product development" or solution implementation. One option is full-blown software development, where you start from scratch and build something in a 3GL programming language. This is probably a measure of last resort; in this scenario you are doing something that differentiates your business or organization somehow. I will save that topic for another post. If the requirements coverage is low [below 75% or so], you may consider a composite application strategy. What's that, you may be asking? A composite application is an application that is really just a custom interface over a purchased or packaged system underneath. The idea behind a composite application is typically to provide a much simpler user interface, or to surface the functionality of the underlying system in a channel or way not intended by the original system. Perhaps the composite application strategy deserves its own post, but that is the simple explanation. If a composite application strategy cannot be achieved, you may then start to talk about customizing your purchased system.
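The decision flow just described can be sketched as a small function. This is only a sketch: the 75% threshold comes from the text above, but the function name, parameters, and return labels are all invented for illustration, and a real evaluation would weigh far more factors than coverage alone.

```python
def choose_strategy(requirements_coverage, composite_feasible=True):
    """Recommend a solution strategy, given the best available packaged
    product's requirements coverage as a fraction (0.0 - 1.0).

    Hypothetical helper illustrating the flow in the chart above.
    """
    if requirements_coverage >= 0.75:
        # Strong coverage: purchase the product and configure it
        # with its native tools. No customization needed.
        return "purchase_and_configure"
    if composite_feasible:
        # Lower coverage: consider a composite application, i.e. a
        # custom interface layered over the packaged system.
        return "composite_application"
    # Only after the options above fail do the last resorts appear:
    # customizing the purchased system, or building from scratch.
    return "customize_or_build_from_scratch"


print(choose_strategy(0.80))  # purchase_and_configure
print(choose_strategy(0.50))  # composite_application
print(choose_strategy(0.50, composite_feasible=False))  # customize_or_build_from_scratch
```

The ordering of the branches is the whole point: customization is only reachable after both the configure-only and composite paths have been ruled out.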
When you do customize, make sure you understand that you are making some very, very expensive decisions, and that you will keep paying for those customizations every time you upgrade or patch your system. Customizations are a measure of last resort, PERIOD!