
“Inexpensive” 3D

A few weeks ago, I ended up in Orlando somewhat by chance. Since it had been a few years since my last visit, I took the opportunity to check out a few of the newer exhibits. One that struck me in particular was Mickey’s PhilharMagic (Disney, wikipedia), which is staged near the middle of the Magic Kingdom. The theater itself is designed somewhat like the fictional theater in the 1993 John Goodman film Matinee (imdb, wikipedia). In the film, Goodman’s character, Lawrence Woolsey, introduces what he calls Atomo-vision and Rumble-rama. These innovations bring more senses into the movie-watching experience, like touch through vibrating seats that kick in at just the scary moment. In the current Disney version, they use lots of gimmicks like sprayed water, various scents, smoke, and bursts of air to enhance the experience.

One of the additional features is the use of 3D with more modern glasses that look almost like cheap sunglasses. Of course, there are all of the standard 3D gags: pies flying at your head, trombone slides popping off the screen, and gems floating in the air that seem easy to reach out and take for yourself. We’ve seen all of that done before. What I found interesting were the other applications of 3D, like flying through the clouds with Donald Duck, swimming under the sea with the Little Mermaid, and riding the magic carpet through narrow streets and buildings with Aladdin. The 3D models of those environments, combined with the 3D glasses, made it feel like we were actually flying through them.

So here’s my question: couldn’t we do the same thing on computer screens with video game technology? It shouldn’t be that hard for the “cameras” in video game engines to render two slightly offset views and combine them for a set of inexpensive 3D glasses. Rather than spending all of the money to create heavy and expensive headgear, couldn’t this be a simpler, less expensive, and faster solution? Sure, maybe the image resolution won’t be as high, but what I saw was more than enough to create the illusion. Can some of my engineer readers fill me in on this?
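To make the idea concrete, here is a rough sketch of the two-offset-cameras trick, not tied to any particular engine. The cheapest version is the old red/cyan anaglyph, which works on any ordinary monitor with cardboard glasses; polarized glasses like the park uses need a special display, but the rendering side is the same. The render(scene, eye_pos) call below is a hypothetical stand-in for whatever a real engine provides, and camera_pos is assumed to be a 3-element numpy position.

```python
import numpy as np

EYE_SEPARATION = 0.065  # metres, roughly the average distance between human eyes

def anaglyph_frame(scene, camera_pos, render):
    """Render the scene twice from slightly offset eye positions and merge the
    two views into a single red/cyan anaglyph image.

    render(scene, eye_pos) is a hypothetical engine call that returns an RGB
    frame as a (height, width, 3) uint8 numpy array.
    """
    offset = np.array([EYE_SEPARATION / 2.0, 0.0, 0.0])
    left = render(scene, camera_pos - offset)    # left-eye view
    right = render(scene, camera_pos + offset)   # right-eye view

    frame = np.empty_like(left)
    frame[..., 0] = left[..., 0]     # red channel comes from the left eye
    frame[..., 1:] = right[..., 1:]  # green and blue come from the right eye
    return frame
```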


Interviewing by Doing

Most people in the industry have come to the conclusion that “learning by doing” is not only the best approach, but possibly the only approach for deep, sustained learning. Now the question is what other things are best done by doing.

One of the other largely broken processes is interviewing external candidates. Here’s the basic process as it often exists:

  1. Company posts a job on their website, job board, and various newspapers.
  2. A potential candidate finds the job and sends in a resume.
  3. The company scans the resumes, either with automated keyword matching or by manually reading what are often hundreds of them.
  4. Phone interviews are conducted by an HR person.
  5. In-person interviews follow with the hiring manager, higher-ups, and occasionally peers and subordinates.
  6. An offer is made and accepted.
  7. The new hire shows up for work.

This process is filled with difficulties and limitations. The job description is often inaccurate, or unclear to external candidates because it contains lots of internal jargon. The candidate’s resume is rarely a good representation of their capabilities. Interviews are filled with self-reported capabilities and results. And, most importantly, the candidate’s capabilities aren’t truly evaluated until they are on the job, and sometimes it takes several weeks before they become evident.

Certainly a lot of companies are innovating in this space and trying to improve the process for both the company and the candidate. Mirroring “testing out” in learning, some companies have implemented various testing (skill and fit) at the beginning of the process to help determine capabilities early on, with varying degrees of success. Often those evaluations fall down in the same place that pre-testing in learning does: they rely on self-reported evaluations of knowledge, have little focus on capabilities, and have few ties to actual performance. Many of the consulting companies use, at least to some extent, case-based interviewing, which usually starts with a story or situation and then asks “What would you do?” That approach certainly gives some insight into a person’s thought process and problem solving, but it often leads to textbook answers, which reveal little about a candidate’s actual capabilities.

What if we took the entire interview process and turned it nearly upside down? Well, one local Ann Arbor company, Menlo Innovations, has taken their well-integrated philosophy of learning by doing and translated it into the interview process. I’ve already written about them a couple of times (Double Your Costs to Save Money and Be Your Own SME). They clearly take their core philosophies and run them throughout the business. Here’s the alternative approach that they use for interviewing:

  1. Candidates learn about the company (and the company learns about the candidates) through a variety of meet-and-greets, receptions, and free classes for the community, resulting in a large pool of potential candidates.
  2. Selected candidates from the pool are invited to attend an evening Q&A session where the senior executives talk about the company, demonstrate their approach, and, obviously, answer any questions candidates might have.
  3. The candidates are then brought in for a 3-hour “interview” where they are paired with other candidates in their job category in a series of 3 rounds. Each round is observed by a different employee. During the round, the candidates are given a real-world task to achieve. Programmers are asked to estimate a task. Project Managers are asked to schedule or adjust a project. Interestingly, the objective for each pair is not to look good individually, but to make their partner look good regardless of that partner’s capabilities. Given the structure of the organization (wholly focused on agile programming), this round is designed to determine an individual’s capacity for teamwork.
  4. Those that have made it past the earlier rounds are brought back for the next round, which is the candidate’s first day on the job. The interview? Do the work. The candidate is put on a real project for a real client with real team members. It’s so real that the State of Michigan requires that the candidates be paid for the time worked.
  5. The final round is a 3-week trial; again, the work and the pay are real.

Notice that nowhere in the process were typical interview questions asked. No self-reporting. In fact, the only Q&A is from the candidates, not the company. The evaluation process is observation.

The process certainly has limitations, as they all do. Not all candidates can wait for an opportunity through the pool process, and even fewer can do a 3-week trial. Also, a likely higher-than-normal share of candidates self-selects out early in the process after the Q&A. However, even given its limitations, it still has a lot of great things going for it.

It strikes me how close this is to the philosophy of learning by doing. It makes me wonder where else we could be applying these concepts.


Theory or Practice, which comes first?

I’ve been thinking a lot recently about theory vs. practice. In both corporate and academic education, courses almost always start with the theory; then, if there’s time left at the end, practical applications are squeezed in, and rarely is any time left for practice. Forget about the likelihood that people will apply what they just heard to their job or to their life; people forget what they’ve heard before they walk out the door. Why is that? Well, I’m sure there are a lot of great theories on that.

Every day, I’m becoming more and more of the opinion that for most needs, practice (trying practical examples in a safe but realistic environment) is the only way to learn any content. Theory is easy to forget because there is no reference, nothing to hook that information onto. By starting with practice and, if appropriate, gradually generalizing, learners build experiences through which they can understand the theory, reaching greater levels of understanding that include exception handling and alternative approaches.

In some ways, this is the same problem programmers often run into. They start by programming for the exceptions, the occasional uses, or the power users. In fact, the power users (in learning, often called SMEs) are often the wrong audience. They handle exceptions and special cases more frequently than others, which makes those activities seem more prominent than they really are. In programming, coding for the exception leads to interfaces like Word or Excel that are far too complex for the average user. As users progress, they may certainly need some of those functions, but by then they will have a framework and experience on which to hang the more complex theories and approaches.

This week I’ve been back diving deep into Excel. It’s an awesome program. Every time I have a problem to solve, there’s a new feature to be discovered that can solve it. This week it was the Address() and Indirect() functions saving me tons of time. I knew they existed. I even knew somewhat what they did. They could have saved me tons of time in the past, but I wasn’t ready for them yet. I’ve taken (and frankly taught) my fair share of Advanced Excel classes. Honestly, I had to smash my old Lotus 1-2-3 framework to truly understand these functions. If these had come up in class, I would have promptly forgotten them. I wouldn’t have understood how they truly worked and, worse, I wouldn’t have had a need for them anyway…a prime case for forgetting. Now, however, I was ready and had a need…no worries about forgetting them now.
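For those who haven’t run into them, a quick gloss: Address() builds a cell reference as a text string, and Indirect() turns that string back into the value of the cell it names, so references can be computed on the fly. Here is a rough Python analogue of the combination (a toy sheet, not Excel itself, and simplified to A1-style relative references):

```python
def address(row, col):
    """Rough analogue of Excel's ADDRESS(): build an A1-style reference string."""
    letters = ""
    while col:
        col, rem = divmod(col - 1, 26)
        letters = chr(ord("A") + rem) + letters
    return f"{letters}{row}"

def indirect(sheet, ref):
    """Rough analogue of Excel's INDIRECT(): resolve a reference string to its value."""
    return sheet[ref]

# A toy "sheet": the reference is computed at run time rather than hard-coded.
sheet = {"A1": 3, "B1": "price", "B3": 19.99}
row_to_read = 3
print(indirect(sheet, address(row_to_read, 2)))  # 19.99, same idea as =INDIRECT(ADDRESS(3,2))
```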

Excel is a powerful tool, but it may not be the best place to start for many learners. Certainly, I wouldn’t start with the theory behind the Address() and Indirect() functions, even though I now know how often they could be used. Frankly, in the Excel classes that I teach, I don’t like starting with how to use Excel. I prefer to start the class with a goal in mind. What is it the learner wants to do? Which parts of Excel can help them do that in the simplest way possible? Later, we can build on that when the next need arises.

Yes, I’ve used Excel as the example here, but I think this applies equally well to any other topic. In calculus, I still don’t understand the theories behind derivatives or integrals, largely because we started with them and I had nothing to connect them to. If we had started with dozens of practical applications (not math problems, applications) and then abstracted, I might remember (and use) something. The same was true in statistics. I wish we had started with statistics as an approach to analysis and to determining how confidently that analysis can be used. Then all of those p-values, t-tests, and chi-squared tests might have some chance of getting used (correctly, at that).

The problem doesn’t just rest with traditional courses (math, programming, etc.). It’s just as true for customer service training (dealing with difficult customers, running the cash register, up-selling, etc.). Don’t start with the theory; give me the basics…the one or two things that I’m going to do most often, and let me practice them right away. Give me coaching. Give me feedback. Let me try, fail, and grow. Then build on that knowledge…one step at a time.

So here are my takeaways:

  • The current practice of theory then (maybe) practice is backwards. Practice then theory seems to be a better approach.
  • Start with what I need to actually DO. I’ll learn what I need to KNOW from there.
  • Only when I’ve had some practice and experience am I ready to bash my old frameworks as I build new ones.

(Less Search) More Find

I’m a huge Google fan. They’ve done some amazing things with search and, more or less, with all of their other products. I use Google search easily dozens and dozens of times a day. However, it has a bit of a downfall. Searchers need to know a pretty decent amount about their subject to get the best results. For example, Google can’t tell me the name of that girl I went to high school with. Once I have her name and maybe some recent location or company information, there’s almost always a ton of stuff that can be found out about a person (scary, but true).

Finally, a piece of software has come along that solves a similar problem, one that has plagued humanity (probably) since the beginning of time…what is the name of that song that keeps endlessly looping in my head? Who sang it? Well, now there’s a search engine that lets me find out just that. Grab the computer microphone, sing the tune (even humming works just as well), and it will come up with a list of matching songs, who sang each one, and the opportunity (of course) to buy the song. Check it out at Midomi!

It doesn’t matter how well the searcher sings or even hums. Even some of the words can be off a bit. If it doesn’t find the song, the recording can be used to teach Midomi which results it should have found. Pretty cool. I would love to know how they do it.
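I have no inside knowledge of how Midomi actually works, but one classic query-by-humming trick is to throw away absolute pitch and match only the up/down shape of the melody, which would explain why being off-key doesn’t hurt much. A toy sketch of that idea:

```python
from difflib import SequenceMatcher

def contour(notes):
    """Reduce a melody (as MIDI note numbers) to its Up/Down/Repeat shape."""
    return "".join(
        "U" if b > a else "D" if b < a else "R"
        for a, b in zip(notes, notes[1:])
    )

def similarity(hummed, reference):
    """Score how closely a hummed melody's shape matches a reference (0..1)."""
    return SequenceMatcher(None, contour(hummed), contour(reference)).ratio()

reference = [60, 60, 67, 67, 69, 69, 67]  # opening of "Twinkle, Twinkle"
hummed    = [62, 62, 70, 69, 71, 71, 69]  # off-key, and one interval is even wrong
print(similarity(hummed, reference))       # still scores high (about 0.83)
```

A real system adds rhythm, error-tolerant indexing, and a huge song database, but the forgiving nature of the match comes from ideas like this one.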

If only there were a way to find all of the other random things floating around in my head…


Password Pains

How many times have I gone to a website and forgotten my username or password? Actually, I didn’t forget the password; I forgot which one I used. Was it one of the ones with fewer than 6 characters, 8 characters, or more? Did it include numbers? What about case-sensitivity (mixed upper and lower case)? I use many passwords for each category, but which category is it? Each website follows its own rules for password strength.

The stronger the password, the less likely a stranger (or hacker) could guess or discover it. That’s why using birthdays, anniversaries, children’s names, and other basic information is highly discouraged. Of course, strong passwords are also much harder to remember. In fact, the strongest passwords disappear after use and are created anew the next time, but that’s a different blog entry.

What makes for a strong password anyway?

  • Longer is better – the longer it is, the harder it is to guess or break
  • Use nonsensical letters or words – “treeball” or “xiqjlkr” are much better than “Jane”
  • Mix letters (uppercase and lowercase), numbers, and symbols, if at all possible
  • Don’t use information about you that is discoverable such as names, places, or dates
  • Don’t use sequences such as 12345, abcde, 5555, or qwerty
  • Don’t reuse it – each site or system gets its own password
  • Don’t write it down…anywhere – as soon as it’s written down, it’s available to anyone

Want to test your password strength? Try this site.
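For the programmers reading along, here is a rough sketch of the kind of checks a strength meter might run against the rules above. It’s illustrative only; real attackers use dictionaries and patterns rather than pure brute force, and this is certainly not what that particular site runs:

```python
import math
import string

COMMON = {"password", "qwerty", "12345", "abcde", "letmein"}

def estimate_strength(password):
    """Rough estimate: bits of entropy from character-pool size and length,
    plus flags for the common mistakes listed above. Illustrative only."""
    pool = 0
    if any(c.islower() for c in password):
        pool += 26
    if any(c.isupper() for c in password):
        pool += 26
    if any(c.isdigit() for c in password):
        pool += 10
    if any(c in string.punctuation for c in password):
        pool += len(string.punctuation)
    bits = len(password) * math.log2(pool) if pool else 0.0

    flags = []
    if len(password) < 8:
        flags.append("too short")
    if password.lower() in COMMON:
        flags.append("common password")
    if len(set(password)) <= 3:
        flags.append("too repetitive")
    return round(bits, 1), flags

print(estimate_strength("abcde"))          # few bits, flagged as short and common
print(estimate_strength("tr3e!Ball%x7Q"))  # long, mixed pool: far more bits, no flags
```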

In any case, what prompted me to write this entry is the pain of passwords. Every site and system has different rules, such as 6 alphabetical characters at most, 4 numbers only, or no more than 8 characters that must include at least 1 number not at the beginning or end. They are more than happy to remind me of their unique rules during registration and not let me move forward until I follow them. However, when it comes time to recall the password, the rules are nowhere in sight. If the webmasters are going to require strong passwords (which is good), then at least tell me at the login screen what the rules were. Does it require a number or a certain length? Tell me that. It will reduce my frustration and have no impact on security.

While I’m on this rant, let me say that any internal corporate system should use one of the many Single Sign-on approaches. For example, requiring users to log in to Windows, then into the intranet, then into the LMS, and then into the course almost guarantees that learners are going to drop out before they start the course. The systems should already know who I am from the first time I was authenticated. From there, I should be able to click one link/item/button and go directly to the part of the course that I need right now.

There are plenty of other security approaches that avoid passwords altogether, but for now passwords are still the most common. So, as long as we’re still doing security this way, can the programmers and system designers at least help users keep the system secure by making it unnecessary to write down all of those passwords? Providing the required format on the screen where the password is requested is one step in the right direction.


Volunteers as New Recruits

Over my career, I’ve been fortunate to work for 3 companies that were all, at the time (and some still are), ranked among the best companies to work for in the U.S. For anybody that’s been involved in the rankings, the award is a great honor and, like all rankings, has a few problems, but that’s not the point of this blog entry. In any case, it can be said that the companies on the list are doing a lot of things right and can serve as great examples for a variety of ideas that other companies can implement. Many criteria are used to determine these rankings, such as company culture, employee benefits, training hours, community involvement, and even hiring processes.

I know I’ve written about them a couple of times now (Double Your Costs to Save Money, Be Your Own SME), but if Menlo (a custom programming shop here in Ann Arbor, Michigan) were just a little bit larger, they should most definitely apply. I’m quite sure they’d win, if not for all of the other things they do well, then most certainly for what turns out to be a combination of recruiting, marketing, and community service programs. Much like all of the great companies to work for, Menlo has people knocking down the doors to work there, far more than they could ever employ. So many, in fact, that people began asking to work there for free! Imagine that. People love the company so much that they’d be willing to show up for free. Of course, Michigan employment law won’t allow that, so Menlo came up with a neat solution. The volunteer corps works up to 3 mornings a week (according to their own schedules) on pro bono projects for non-profits. There are tons of benefits to the program:

Non-profits

  • Software that they need but could never afford

Menlo

  • Giving back to the community while focusing their paid resources on revenue-generating projects
  • Quick and easy access to a potential pool of employees who are pre-trained, know the company processes, and whose skills are known completely before hiring
  • Spreading the word that software development doesn’t have to be bad
  • Those that don’t become employees likely still become evangelists and possibly even customers

Volunteers

  • Free on-the-job training (no prerequisite knowledge or experience is required)
  • Work experience, resume enhancements, and recommendations (as appropriate)
  • Free access to all Menlo classes (often over $600 each)
  • Giving back to the community

The costs to Menlo are the resources (computers, desks, etc.) and the training on their processes. All volunteers are expected to use and follow the quite unique Menlo processes and to represent themselves professionally in front of the non-profits as liaisons for the company.

What’s fun is that people are practicing real work in a safe environment. In some ways, this is one of the best simulations ever.


User Documentation as a Bug

In a recent e-mail, a colleague noted that “we should view user documentation as a bug.” From an interface design standpoint, he’s completely right. If it needs documentation, it probably wasn’t designed right. In the past, this documentation was often printed out (or handed out at training) and quickly put on the shelf, where it was never referenced again. More modern tools have moved that to online help and performance support. Now, I’m a big believer in performance support as a big part of learning, so I’m by no means saying we shouldn’t have it. But system designers often rely on the documentation and support tools to cover for bad design…just follow the 10 easy steps in the manual. If the steps were that easy, the manual wouldn’t be needed in the first place.

So, a good user interface (whether it be a computer system, learning management system, or an online course of any sort) must meet 3 requirements:

  1. Discoverability (i.e., How do I know that the function exists?) A key part of interface design is making sure people can find the various functions that they need. Often, these features are segmented by frequency of use and volume of users. A function that is frequently used by a majority of the people should be much more prominent than one that is used only by an expert or administrative user. This means making these functions the simplest and most automated, rather than programming for the exceptions, which is often the case. When frequently used functions are easy to find, documentation becomes unnecessary.
  2. Functionality (i.e., How do I know how it works?) It’s not enough to be able to find the function in the interface. It must also be intuitive to use. Take adding a picture or a table to a Word document. Why can’t I just grab the picture, resize it, and drag it to the exact position I want it in? Instead, I have to go through a dozen clicks to get the picture looking the way I want. More importantly, the only reason I know how is that I’ve had to do it a lot. Try training somebody on page layout in Word…“Well, to get it where you want it, you really should use a table that’s invisible. Then you adjust the rows and columns so that the picture goes where you want it to go.” That’s intuitive.
  3. Outcomes (i.e., How do I know what it’s going to do?) Finally, once I know where to find it and how it works, the outcomes have to be predictable. Ever insert text into Word and all of a sudden the font is different from all of the surrounding text? At least in the old WordPerfect, reveal codes could help figure out what was going on. The outcome of typing text, the most basic function in Word, should be completely predictable. The same is true of any other software we design.

If the design is done well, each of these should be obvious. That doesn’t mean that there shouldn’t be documentation of a sort. Even in these cases, I still don’t believe in a paper manual (or even an online manual). Instead, a well-designed performance support system is what is needed, especially for the less frequently used functions or for new learners. The performance support system should be optimized to deliver quick answers. In addition, it should include functions for “show me how” and “let me practice”.

While we should still build performance support systems, if we’ve done our design right, hopefully no one will ever need to use them.


Individual Tools vs. Integration

I just got back from Microsoft’s “Ready for a New Day” launch of Vista, Office ’07, and Exchange ’07. It struck me as I was sitting there: the sheer volume of integration across the various software and systems. It’s certainly easy to slam Microsoft for their slow development cycles, less-than-fault-tolerant operating systems, and heavy-handed near-monopoly. Yet one of the things a monopolist can (and should) do better than anybody else is integration.

As much as I work with and try all sorts of “Web 2.0” or “e-Learning 2.0” software, none of it is able to accomplish the same things yet, and that may be inherent in the process. Given its intimate knowledge of the entire platform, Microsoft should be able to make all of the pieces and parts play together nicely. I remember back in 1987 (my first year as an official IT person), Microsoft released Windows 2.0 and Dynamic Data Exchange (which evolved into OLE, COM, and ActiveX), which became the basis for the now-standard functions of drag-and-drop and cut-and-paste between applications. Before that time, Word and Excel wouldn’t talk to each other at all. Today, not only do Word and Excel talk to each other, but they talk to every other program including e-mail, blogs, wikis, web services, and on and on and on.

There’s a lot of work going on among both open source projects and competitors to Microsoft, and frankly, some of it is the most innovative work out there…often much more innovative than Microsoft’s. Yet, in the business environment, we need to balance innovation and integration. The more our systems talk seamlessly to each other, the lower our costs and the faster we can move. In a corporation, we don’t need someone to design yet another wiki or blog. What we need is someone to make the wikis and blogs we’ve got play well with everything else. (Enter Microsoft’s new version of SharePoint.)

So far, the “smaller” players have yet to work together to create integrated solutions. Certainly Google has developed a lot of very cool tools that I use on a daily basis (this blog included). Yet, their blog, calendar, search, word processor, and spreadsheet don’t talk to each other…at least not that much, and Google is the biggest in this space right now. Imagine how difficult it is for the smaller developers.

Yes, moving to web services is a step in this direction, and Microsoft has a lot to do to move even further this way. However, web services, Web 2.0, XML, and all the rest don’t do any good if everything is built as a standalone application. I don’t need another portal page with 1,000 different unrelated objects on it. I need a page that is integrated. Tell me how the weather is going to impact my sales forecast. Tell me how the stock prices or news announcements of my competitors correlate with my business results. I hate to say it, but maybe standards are the answer. Standards can help disparate developers create things that work well together without having to know anything about each other. On the other hand, most standards that I’ve observed take a long time to reach consensus and in some ways serve to stifle innovation, since the truly innovative project often won’t comply with the standard.

Anyway, whether we want it to or not, Microsoft is going to continue to have a major impact on Information Technology and Learning for the foreseeable future. It’s probably best that we all get to know it as well as we can in order to leverage what it can do for our organizations. If you want to sign up for one of these sessions, there are still a couple dozen left around the country before the end of the year. The Detroit one sold out; I don’t know the status of the others. The big draw is a free copy of Office 2007 Professional (Word, Excel, PowerPoint, Access, Outlook, and Publisher) and a free T-shirt. Whether you’re a Microsoft or open source fan, it’s likely that the Office suite will still be the dominant tool set for quite a while, especially in corporations. So, why not pick up a free copy? For the open source fans, think of it this way: at least Microsoft won’t get an extra US$499 retail (or US$329 for the upgrade).


Animatics – Storyboard v2.0

Much like a system design document does for programmers, the storyboard guides the development process for asynchronous e-learning (CBT, WBT, etc.). In fact, they share a lot of the same characteristics. The storyboard lays out all of the screen elements including text (often with the full script), images (usually in sketch form), interactivity, and any other functionality. Often created in Word, the storyboard usually starts with an outline or general flow and develops into a detailed specification for the artists, programmers, reviewers, and sponsors. However, full-blown storyboards take a lot of time to create and have lots of limitations. Reviewers and sponsors find them hard to follow. Even when the module is designed exactly to the storyboard specifications, they will often say “that’s not what I thought that meant”. Storyboards suffer from an innate problem: they are one medium (print) being used to describe another (online), yet print has none of the features of the online environment to help with the description.

Enter the animatic. While animatics have been around in one form or another since the early days of movies, the person really credited with putting them into action is George Lucas. There was a great documentary on A&E called Empire of Dreams. They re-run it occasionally, and I expect they will again this May for the 30th anniversary of the original movie. In it, there’s a segment about 50 seconds long that does a great job of describing his conversion from storyboards to animatics. I would post a copy, but I’d probably get in a lot of trouble. Apparently it can be found on one of the DVD sets as well. Anyway, the artists and programmers weren’t getting what George wanted after multiple attempts using the storyboards as guides. So, George went out, took bunches of clips from old movies and storyboards, and pieced them together to create the animatic. What he gained was a sense of scope, timing, speed, and emotion that the storyboards could never depict. One story from Lucasfilm about the animatic for Star Wars Episode I can be found on the Star Wars website. A ton of additional animatic examples can be found on YouTube, including this one from Raiders of the Lost Ark. The first picture is the scene from the final movie. Imagine the script…“fire shoots out of him”. Where did the fire come from? Out of him from where? In what directions? For how long? What happens next? The animatic (one cell pictured) helps illuminate some of those questions.

While originally designed for movies, animatics bring a lot of power to the e-Learning development cycle. While it may still be easier to create a basic outline and some of the initial script in Word, moving quickly to an animatic can bring significant cost reductions to the process. Here are just a few:

  1. Easier reviews – From the beginning, sponsors, subject matter experts, and quality assurance are all reviewing the module in its final format. Spacing, timing, look & feel, and other elements are all present from the start. This benefit is somewhat related to the concepts of extreme programming, where the module is in working order and available for review at any time during the development process.
  2. One master document – In the storyboard process, what happens if content changes in the middle of programming or design? Do the storyboard and the module both get updated? Which one is the definitive version? By having only one document (the animatic that evolves into the final module), there’s only one document to update, and it’s the real module. This also reduces version control issues to some extent, since there is only one document to keep current.
  3. Interactivity and motion – Frankly, no storyboard can do this. There’s just no way on paper to effectively demonstrate interactivity, motion, or animation, especially if timing of any sort is involved.
  4. Fewer iterations – Iterations take lots of time and money. In fact, iterations are some of the largest costs in projects. Changes to features, functionality, and even the interface can cause significant overruns if not caught early. Using an animatic gets the sponsor, reviewer, and subject matter expert closer to what the module will do more quickly, hopefully exposing those issues at the early stages rather than later.

I’m not suggesting that animatics completely replace storyboards. Often the storyboard can be a great step in between the outline and the animatic. However, when designing the next e-Learning course, think about using animatics to communicate the module’s timing, flow, and spacing better than paper ever could.


Simulation Tools and The State of e-Learning

NexLearn is a custom content developer in the e-Learning field. Specifically, they develop custom scenario-based, branching simulations. (See my earlier posts for more definition of the 4 types of simulations or the 3 approaches to scenario-based simulations.) Through years of doing custom development, they had created their own tools to make the process easier with less programming. After frequent customer demand, they decided to release those tools to the market in a product called SimWriter. It’s been out for about a year and a half (now on version 1.8) and continues to be the most powerful branching, scenario-based simulation tool on the market. Other tools such as Captivate certainly exist, but SimWriter remains the most full-featured in the space. Some of the more sophisticated things it can do are listed below (with a rough sketch of the branching structure after the list):

  • Loop backs (moving to a point higher in the tree, rather than just lower)
  • Conditional branches that only appear based on meeting certain criteria
  • Editing in Word so that not all team members need the tool
  • Inclusion of video/audio throughout the entire piece
  • Immediate and/or summary feedback to the learner (by objective, if desired)
  • SCORM-compliance
  • Output to Flash so that no special player is needed
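
To make the first two bullets concrete, here is a rough sketch of how a branching scenario with loop-backs and conditional choices could be represented. This is purely illustrative and is not SimWriter’s actual data model:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """One decision point in a branching scenario."""
    node_id: str
    prompt: str
    # Each choice: (label, target node id, optional condition on the learner's state)
    choices: list = field(default_factory=list)

def available_choices(node, state):
    """Conditional branching: only offer choices whose condition passes."""
    return [(label, target) for label, target, cond in node.choices
            if cond is None or cond(state)]

nodes = {
    "greet": Node("greet", "The customer looks upset. What do you do?",
                  [("Apologize and listen", "resolve", None),
                   ("Quote store policy", "greet", None)]),  # loops back instead of moving forward
    "resolve": Node("resolve", "They calm down. Offer a refund?",
                    [("Yes", "end", lambda s: s["refunds_left"] > 0),  # conditional branch
                     ("No, offer store credit", "end", None)]),
}

state = {"refunds_left": 0}
print(available_choices(nodes["resolve"], state))  # the refund option is hidden
```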

However, with all of those great features, in my opinion it stumbles in two areas: usability and pricing. To include all of those features, the usability suffered a bit. The tool includes a lot of features, some of which are less frequently needed, but all of which still made it to the forefront of the interface. It certainly can be learned, but it’s not as easy to pick up as some of the tools with fewer features. I understand that the next major release will likely include some significant interface improvements. Of course, as with all complex tools that serve a somewhat smaller market, the price was set quite high as well. I think they will see some significant price pressure over the next year or two. Having said that, this is still the best tool on the market for the creation and editing of a full-featured, scenario-based simulation.

I’ve known the team at NexLearn for quite a while. They just released the latest issue of their online newsletter Simpact. In this issue, Thomson NETg President Clint Everton and I answer a few questions on the state of e-Learning, such as:

What’s changed in e-Learning?
What is the perfect e-Learning experience?
What’s wrong with most e-Learning today?
What’s in store for e-Learning in the future?

Read the article to hear my views and rants on things like measurement, performance support, simulations, and self-publishing. Over the next few weeks, I’ll be posting more detailed versions of those topics, but check out the article for some coming attractions.

