The mere thought of migration implies its own set of software and hardware expenditures. Any IT manager contemplating a migration has already considered these costs and the projected benefits of a migration. This article from Informit's Windows Server Reference Guide
examines the often overlooked costs of re-engineering and how the prospective migrant views a migration.
The re-engineering category of expenditures covers redesigning the network structure, retraining administration and support personnel, retraining users whose operating procedures will change (whether slightly or completely), and, no less importantly, executing the migration plan itself.
Re-engineering redefines a company's entire concept of its core asset: information. This redefinition is not an upgrade; rather, it is a paradigm substitution. Long-time Windows users, especially administrators, often fail to appreciate the realities of the world that prospective migrants hail from, and the depth of their rational skepticism toward any campaign, however well-meaning, to transplant them from a realm that, seemingly just yesterday, was characterized as ideal.
Think about the following key characteristics of re-engineering from the perspective of the prospective migrant:
- In most NetWare environments, what a user has rights to is defined mainly by the password barriers restricting access to a storage device, and secondarily by the DOS-style attributes of the files stored on that device. In these environments, there is no clear statement or record of what any particular user can or cannot do based on who that user is. So the concept of "importing profiles" fails to apply to that realm; a user has no profile beyond the collective restrictions placed on him or her by the various storage devices and logical volumes. The issue here becomes "creating profiles." And while most Microsoft literature devotes only a short chapter to that topic, that is because the real work lies not in comprehending profiles but in creating them and, more importantly, enforcing them.
- The topology of older LANs was designed not for the rapid distribution of files or the centralization of resources, but for maintaining the distinction between so-called "local data" and so-called "remote data." The data that belonged to you (your spreadsheets, your documents) typically lived on your own PC, while the "remote" server presented a consistent view of the company database and served as a repository for those application components that could be centralized. Even the most performance-evolved networks that sprouted from this topology, still in use today in countless small businesses, maintain and even reinforce the archaic notions of local and remote data. So now that Windows Server 2003 has made a DNS topology economically viable for small businesses, it is no wonder that many trainees for this new realm start out imagining subdomain distinctions such as \local and \remote for compartmentalizing company data in a familiar manner.
Imagine an English speaker learning German for the first time, encountering a textbook chapter on forming infinitive verbs without ever being told that German infinitives fall at the end of their sentences. We tend to treat the fundamentals of the realm we live and work in every day as though they were permanent, forgetting that any transition to a new set of fundamentals requires not only retraining but un-training, which generally takes longer.
- The centralization of resources regardless of the distance between storage devices or processors, made possible by the Internet-derived Domain Name System (DNS) adopted by Active Directory, enables a more function-oriented, less geographical approach to designating the roles of various hosts. One Microsoft server specialist calls this transition migration-by-role. The term has not caught on, but it should. It implies the possibility of smoothly absorbing authority from the various affiliates in the older network topology into a domain of one or more processors that logically represents each function, as a whole, for the entire company. Phased migration, in which the company focuses on one absorption at a time, is the only foreseeable strategy for such a process. But even the smoothest technological strategies fail to account for the subsequent delegation of personal authority over information resources that the older topology had spawned. Administration of applications, database facilities, and user policies has also been geographically distributed in companies that adapted to the older technological structure. In such migrations, the job of re-engineering often crosses over from IT into Personnel.
- Most Chief Information Officers consider their business's most valuable asset to be its core business logic. To this day, international banking institutions retain their investment in decades-old mainframe "big iron" because, regardless of how many new flavors of Visual Studio .NET Microsoft invents every year, the next generation of that logic has yet to be created. Technologically speaking, it is feasible for a three-tier network to make mainframe logic available through middle-tier "blade" servers, and Windows Server 2003 is a leading candidate to run those servers. The first networked applications for LANs, although newer than "big iron" tasks, were based on mainframe processes. So in an era when the role of a PC's operating system was to provide the command line and let you occasionally run CHKDSK, the business application also managed the user. If there was such a thing as policy management, it took place at the application level. As a result, businesses that still have no choice but to retain their investment in older logic end up replicating functionality such as user management (logins, permissions, time-stamps) even after the migration is completed. The changing role of the operating system fails to account for the obstinate role of older applications.
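The access-model contrast drawn in the first bullet above can be sketched in a few lines of code. This is a minimal illustration only, not real NetWare or Active Directory behavior; every class, name, and password below is invented:

```python
# Sketch: volume-centric access (old model) vs. per-user profiles (new model).
# All names here are hypothetical and exist only to illustrate the contrast.

class Volume:
    """Old model: access is gated by the storage device itself."""
    def __init__(self, name, password):
        self.name = name
        self._password = password

    def can_open(self, supplied_password):
        # Nothing records *who* the user is -- only whether
        # they happen to know the volume's password.
        return supplied_password == self._password


class UserProfile:
    """New model: rights are a property of the user, not the device."""
    def __init__(self, username):
        self.username = username
        self.rights = set()          # e.g. {("SALES_VOL", "read")}

    def can(self, volume_name, action):
        return (volume_name, action) in self.rights


vol = Volume("SALES_VOL", password="s3cret")
alice = UserProfile("alice")
alice.rights.add(("SALES_VOL", "read"))

print(vol.can_open("s3cret"))           # True for *anyone* holding the password
print(alice.can("SALES_VOL", "read"))   # True, and attributable to alice
print(alice.can("SALES_VOL", "write"))  # False: the profile states what she cannot do
```

Under the old model, two users who share a password are indistinguishable, which is why there is nothing to "import"; under the profile model, every right is stated per user, which is what makes it both auditable and enforceable.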
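The "migration-by-role" absorption described above can likewise be sketched. All host and role names below are hypothetical, and the code models only the bookkeeping of a phased migration, not the actual transfer of services:

```python
# Sketch: phased "migration-by-role". Geographically named hosts of the old
# topology are absorbed, one function at a time, into role-based domains.
# Host names and roles are invented for illustration.

# Old topology: servers named by location, each carrying several duties.
legacy_hosts = {
    "chicago-srv1": {"email", "file-share"},
    "denver-srv1":  {"email", "database"},
    "boston-srv1":  {"file-share", "user-policy"},
}

# Target topology: one logical domain per function for the whole company.
role_domains = {}

def absorb_role(role):
    """One phase: pull a single function out of every legacy host."""
    members = {h for h, duties in legacy_hosts.items() if role in duties}
    for host in members:
        legacy_hosts[host].discard(role)   # authority leaves the affiliate...
    role_domains[role] = members           # ...and is represented company-wide
    return members

# Phased migration: the company focuses on one absorption at a time.
for role in ("email", "file-share", "database", "user-policy"):
    absorb_role(role)

print(role_domains["email"])   # the hosts whose email duty is now centralized
print(legacy_hosts)            # legacy hosts progressively emptied of duties
```

Note what the sketch cannot model: the people who administered each geographically scattered duty, which is why the article argues that re-engineering crosses over from IT into Personnel.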
Read more migration advice at Informit's Windows Server Reference Guide.
This was first published in August 2005