Service Desk Software For Those In Need Of Efficiency

However hard we try to create a product that eliminates human error, I doubt it is possible. Even after viewing the demo version of service desk software, I have to wonder whether people will not tire of even this so-called efficient system. What if a voice recording system answers your phone call when you are irate over something? Would that not be a frustrating experience? As it is, people are losing patience waiting through the menu options when they call for mobile banking or even to report a problem. Invariably, they wait to hit the number that brings them a live customer service representative. There is nothing like talking to a person at the other end of the line. The only problem is that the person may not feel the same way, especially if people call only with their grievances!

Nevertheless, we now have service desk software that seems to be on the way to replacing the personal logging of complaints. People have become busier and more stressed, with little or no time to travel to the shop where they purchased a product. They want to handle everything over the phone. Therefore, they want to connect through service desk software so that their complaint is registered and a ticket is raised. There is nothing to be done about the waiting period after that; you can only hope the company has enough manpower to send a representative to your home as quickly as possible.

A Smooth Workflow With Service Desk Software

The good thing about service desk software is that it provides a hassle-free work environment for employees. Running a company website is not easy because there are several things to consider. Customers should be pleased and supported in every concern or question, because they can make or break the business. Through the software, these concerns are addressed properly, and tasks such as reports are well organized. The features vary depending on what the company has opted for, but most of the time service desk software provides a smooth workflow among staff members.

Since the system is automated, all data obtained are preserved and can be used for future reference. Problems that arise are not prolonged, because they are identified before they create complications. The dashboard also provides customized reports, so they are not difficult to review. Manual reporting is eliminated, which minimizes errors. The software is a great tool to ensure that tasks are scheduled, prioritized, and done properly. Overall, service desk software is an important tool for working efficiently and delivering the positive results the company expects.

Star Trek Also Innovated In Its Computer Usage

Today commercial tools are available to create digital models that will stand up during close-up shots. “Over the years the computer graphics community has evolved and gotten better at creating digital models and animating them. Machines are faster, and the software is better. So now more things are possible. You can zoom in on a certain section of any of these CG spaceships and see really nice detail. It doesn’t look as fake as it would have five years ago,” says Jason Turner, senior digital sculptor and project lead at Viewpoint Data Labs (Orem, Utah), which was contracted to help build digital models of the Enterprise and the shuttlecraft. In addition to those spacecraft, Insurrection includes CG versions of the Son’a shuttles, drones, the science vessel and collector, and a holographic ship, which were provided by Blue Sky|VIFX (Los Angeles) and Santa Barbara Studios (Santa Barbara, CA).

Space Shots

Specifically, Viewpoint was enlisted by Santa Barbara Studios–the production company responsible for more than 100 outer space visual effects shots–to craft highly detailed NURBS models of the starship Enterprise and of the shuttlecraft appearing in space. This was done mainly with Alias|Wavefront’s (Toronto) Maya. The modelers also used Alias|Wavefront’s PowerAnimator and Advanced Visualizer, as well as software from Softimage (Montreal) and Nichimen Graphics (Tokyo), all running on various Silicon Graphics (Mountain View, CA) workstations. The result is the first all-CG starfleet in the long history of the science fiction classic, which has traditionally used a mixture of CG models and physical models. “Paramount is able to retire its physical models for the first time for a new fleet of CG space vehicles primed for future missions,” says Bruce Jones, executive producer and vice president of production at Santa Barbara Studios.

For the Enterprise, Viewpoint digitized a six-foot scale model of the starship with FARO Technologies’ (Lake Mary, FL) FaroArm, and then used detailed blueprints of that ship and the shuttlecraft to construct the CG models. Prior to modeling, however, the Viewpoint team had to digitally repair the surface irregularities of the physical model, which incurred substantial wear and tear from years of use.

“The trick was creating a 3D NURBS model of the Enterprise that could be shown all at once, but at the same time you had to be able to zoom right up to the bay and see all the detail,” says Walter Noot, vice president of production at Viewpoint. The decision to build a NURBS model rather than a polygonal model was predetermined by Santa Barbara Studios, which was using Maya. “Maya was fairly new at the time and a little experimental, but because it uses NURBS geometry, it lets you go in as close as you want without seeing faceting, which was a real advantage in modeling these spaceships,” Noot explains.

According to Viewpoint’s Turner, Santa Barbara Studios wanted the flexibility of doing a continuous or long shot of the ship in the distance, then coming right into a close-up. “A NURBS model has that flexibility. It can be a low-resolution model in the distance to save on rendering time, and when there’s a close-up, it can have all the detail–without swapping out the model, which is how you might have approached that in the past,” he says. “All you have to do is dial up or down the resolution. If you were to use a polygonal model, you’d need at least a low- and high-resolution version.”
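The resolution “dial” Turner describes can be sketched in a few lines: one parametric definition of the surface, tessellated at whatever density a shot needs. This is a simplified illustration using a cubic Bézier curve in place of a real NURBS surface; the control points are invented.

```python
# One parametric curve definition, sampled at whatever resolution the
# shot needs: the analogue of dialing NURBS detail up or down without
# swapping models. A cubic Bezier stands in for a full NURBS surface.

def bezier_point(p0, p1, p2, p3, t):
    """Evaluate a cubic Bezier curve at parameter t in [0, 1]."""
    u = 1.0 - t
    return tuple(
        u**3 * a + 3 * u**2 * t * b + 3 * u * t**2 * c + t**3 * d
        for a, b, c, d in zip(p0, p1, p2, p3)
    )

def tessellate(control_points, samples):
    """Sample the same curve at any resolution from one definition."""
    p0, p1, p2, p3 = control_points
    return [bezier_point(p0, p1, p2, p3, i / (samples - 1))
            for i in range(samples)]

hull = [(0.0, 0.0), (1.0, 2.0), (3.0, 2.0), (4.0, 0.0)]
distant = tessellate(hull, 4)    # coarse: ship far away, fast to render
closeup = tessellate(hull, 64)   # fine: close-up, smooth silhouette

# Both tessellations trace the same curve; only the sample density differs.
assert distant[0] == closeup[0] and distant[-1] == closeup[-1]
```

A polygonal model, by contrast, bakes one fixed sample density into the mesh, which is why it would need separate low- and high-resolution versions.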

To create the illusion of immense scale, intricate details had to be incorporated into the Enterprise model’s geometry, including docks, bays, observation domes, and more than 1200 windows as well as mechanical pieces such as thruster engines, inlets and outlets, vents, and hatches. Therefore, the Viewpoint team first digitized a scale model, then reviewed highly detailed technical drawings. “When [Paramount artists] designed these spaceships, they went into extensive detail, as if they were designing real spaceships,” says Noot. However, the drawings did not always match Paramount’s physical model, which required the Viewpoint modelers to improvise. “It was very difficult. When you’re dealing with Star Trek fans, you have to be right on. Some of them know every inch of the ship,” he adds.

The Plot Thickens

In a separate project, Blue Sky|VIFX, the film’s visual effects producer for the interior shots and character animation, created more than 200 CG shots for Insurrection, including the holographic ship, Son’a shuttles, drones, the interior of the science collector used to gather the youth-extending rays from the Ba’ku planet, various Ba’ku landscapes, animated characters (palm pet, hummingbird, and fish), and weaponry.

The Blue Sky|VIFX animators produced about 30 CG shots for a key sequence in which two Son’a shuttles drop scores of drones to shoot tagging devices at the Ba’ku so they can be beamed off their planet. CG drones were used throughout this sequence, from their launch to the close-up battles with the Enterprise crew. Again, Maya was the tool of choice for modeling and animation on SGI Octanes. Rendering was performed with Pixar’s (Richmond, CA) RenderMan.

Additionally, Blue Sky|VIFX engaged Viewpoint to create two NURBS models of Admiral Dougherty’s head to facilitate morphing between the prebattle and postbattle scenes. According to Jim Rygiel, visual effects supervisor for Blue Sky|VIFX, the Viewpoint modeling team scanned the admiral’s face (with and without makeup) using a Cyberware (Monterey, CA) 3D laser scanner to create the initial geometry for the 3D models. They then used proprietary software to transform the data points into the NURBS models. Using Avid Technology’s (Tewksbury, MA) Elastic Reality, RenderMan, and Maya running on SGI O2s, the animators performed the 3D morph. To help bridge the gap between the CG and live action, Viewpoint created 3D accessory surfaces for the character’s ears, teeth, hair, and eyeballs. “It’s difficult to have a guy’s face warp in front of the camera. You can’t do it with makeup,” notes Turner.

Computer graphics also played a key role in the climactic confrontation between Picard and Ru’afo aboard the Son’a science collector, the interior of which was created by Blue Sky|VIFX animators. According to production designer Zimmerman, the struggle occurs over a gigantic cavity of metal girders that open up into space. “With a combination of a real four-story set and a computer-generated model of the structure, we created one of the most spectacular-looking interiors we’ve seen on Star Trek,” he maintains.

Once again, Maya was used for the animation and modeling, along with Side Effects Software’s (Toronto) Houdini, while the team chose RenderMan for rendering and Silicon Grail’s (Hollywood, CA) Chalice and Discreet Logic’s (Montreal) Inferno for compositing.

“That last shot is pretty wild. We had to build a pyro miniature and meld it with the CG interior of the science ship, then place Picard and Ru’afo [who were filmed against a green screen] in the scene. Getting all that to work was a real challenge,” says Rygiel.

Making 3D Better

To people involved with the application of computer graphics in science, the words “firehose of data” will sound familiar. Those were the words used to describe the situation scientists faced in the early ’90s when supercomputers were being used to collect and calculate data, but there were few methods developed for visualizing the data, according to a seminal report published by the National Science Foundation. The situation for scientists then parallels the situation for business people today.

“In the old days, we’d get screens and screens of text and numbers during space missions,” says Butler Hine, president of Fourth Planet (Los Altos, CA), who was working at NASA at the time. “We had to have someone available who was trained to understand that data.”

Then NASA scientists began finding new ways to make the data more easily understood. “Instead of presenting the data as text, we would map the telemetry streams onto CAD models,” Hine says. “A solar panel on a space ship would change color depending on its status, for example. If it was green, everything was OK. If it changed to orange or red, it meant the voltage was dropping.” With this model, nearly anyone looking at the computer screen could instantly evaluate the situation from minute to minute as the data streamed in from outer space.
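The color-coding Hine describes amounts to a simple threshold map from raw telemetry values to statuses anyone can read at a glance. A minimal sketch, with invented voltage thresholds:

```python
# Sketch of the idea Hine describes: map a telemetry value onto a
# status color instead of showing raw numbers. The nominal voltage
# and thresholds here are invented for illustration.

def panel_status(voltage, nominal=28.0):
    """Return a status color for a solar-panel bus voltage reading."""
    if voltage >= nominal * 0.95:
        return "green"    # everything OK
    if voltage >= nominal * 0.85:
        return "orange"   # voltage dropping, keep an eye on it
    return "red"          # serious problem

telemetry_stream = [28.1, 27.5, 24.9, 22.0]
print([panel_status(v) for v in telemetry_stream])
# -> ['green', 'green', 'orange', 'red']
```

The point is that the viewer never has to interpret the number 24.9; the mapping does it once, for everyone.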

Other scientific visualizations, however, produced abstract graphics, and many of the early business visualizations mimicked these abstract visualizations a little too closely, some believe. The abstract representations of business data, the argument goes, do little to make data more understandable to those who are not statisticians or scientists. “A lot of people have created visualizations of financial data that look like abstract weather patterns, and they’ve gotten stuck there,” says Martin Plaehn, CEO, Viewpoint DataLabs (Provo, UT), now a subsidiary of Computer Associates. Plaehn believes that business people are better served by data that assumes the form of familiar objects.

Obviously, as a provider of 3D models, this approach would serve Viewpoint well. But Plaehn’s is not a lone voice in the wilderness. “I was so dissatisfied with scientific visualization,” says Fernando Diaz, who founded 6D (Honolulu, HI) after working at Microsoft on Excel development and at the University of Washington’s Human Interface Lab. At 6D, he and his team are experimenting with metaphors from nature to create virtual worlds.

The first results of that work appear in a “themed entertainment” restaurant in which people can interact with artwork on 10 flat-panel displays and navigate through virtual worlds with continually changing themes. “There’s no beginning or end in these worlds,” he says. “Someone can pick up where you left off.” Eventually, he’ll tie data into virtual worlds such as these. “We could show a stock portfolio as a Zen garden, or people could navigate sales data as if on a golf course,” he says. “I always look to nature for answers. In nature, even the most complex systems are represented in ways that six-year-old kids are able to understand.”

Virtual worlds such as these are filled with 3D objects, and access to 3D models is one of the reasons Computer Associates (CA) recently bought Viewpoint and 3Name3D, as well as an exclusive license to distribute the REM Infografica content. These are three leading providers of 3D models, and owning them helps CA build 3D interfaces. “If you have to stop and read the label on a generic icon to know what it represents, what does 3D do for you?” asks Anders Vinberg, vice president of research and development for CA (New York). “However, if you can immediately recognize the object, you have an immediate, intuitive response.” Four years ago, Vinberg led a team that designed an intuitive 3D interface for CA’s Unicenter software, which is used to monitor large, worldwide computer networks.

Seeing What’s There

In a typical Unicenter application, at the top-most level you might see a screen with a map of the United States. The cities that have offices connected to a company’s network are immediately obvious because a collection of small buildings appears in each city’s location. For each location, a small colored sphere shows the status of the machines in the buildings. A red sphere might indicate serious trouble, a yellow sphere might be a warning. Click on a location, and a closer view of the company’s buildings zooms into view with the colored sphere now indicating where to dive next. You can keep diving until you reach the source of the trouble–all the way inside a computer to see, for example, a disk drive. At each step along the way, the 3D models match objects in the real world. “We have a service operation that takes photographs of the buildings and constructs the models,” Vinberg explains.
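The drill-down behavior can be sketched as a worst-status roll-up over a hierarchy: every level displays the most severe status found anywhere beneath it, so a red sphere at the top always leads down to the faulty component. The network layout and names below are invented for illustration.

```python
# Toy version of the Unicenter drill-down: each node in the hierarchy
# (country -> city -> building -> machine -> disk) shows the worst
# status found in its subtree. Names and statuses are invented.

SEVERITY = {"green": 0, "yellow": 1, "red": 2}

def rollup(node):
    """Return the worst status in this node's subtree."""
    status = node.get("status", "green")
    for child in node.get("children", []):
        child_status = rollup(child)
        if SEVERITY[child_status] > SEVERITY[status]:
            status = child_status
    return status

network = {
    "name": "USA",
    "children": [
        {"name": "Chicago office", "children": [
            {"name": "server-01", "status": "green"},
            {"name": "server-02", "children": [
                {"name": "disk-0", "status": "red"},  # the actual fault
            ]},
        ]},
        {"name": "Denver office", "children": [
            {"name": "server-03", "status": "yellow"},
        ]},
    ],
}

print(rollup(network))  # prints "red": the top-level map shows a red sphere
```

Clicking down from the top-level map simply repeats the same roll-up one level deeper, until the red node with no children is reached.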

The project has been “enormously successful,” according to Vinberg who claims that Unicenter is a $2 billion a year product now. “We were surprised that no one had done this before,” he says.

This ability to represent data dynamically using real-world 3D objects is also being applied in other areas by CA. For example, a food chain in England uses Unicenter to monitor and manage the refrigerators and air conditioners in several stores. A system manager in the company’s headquarters sees models of stores onscreen, and by “diving deeper” sees models of display cases in those stores. Similarly, the system is being used in a hospital to monitor computer equipment. “One time the system manager noticed that a computer wasn’t working,” Vinberg says. “He called the hospital, and they discovered that a janitor had unplugged the computer to sweep behind it.”

“What all these applications have in common is complexity,” Vinberg says. “A Windows GUI works well in a small environment–like managing the data on your own disk drive. But when the environment becomes more complex, you need another visual paradigm.”

For its nScope product, a plug-in that melds a 3D graphical user interface onto the HP OpenView network-management system, Fourth Planet chose a different method of picturing a complex network. Rather than starting with a top-level view and having users click to dive inside, as with CA’s Unicenter, nScope tries to display a network’s entire topology at once. Appearing onscreen are thousands of icons representing computers and servers, with connecting lines between them. Any type of data can be mapped onto the icons and onto the connecting lines, according to Hine–performance, for example, might be indicated by the color and size of connecting lines.

To see what’s happening with the network, a system manager flies through the 3D landscape looking for trouble spots. “Humans have amazing pattern recognition,” Hine explains. “If you take someone through a forest, they can tell at a glance where it’s healthy and where it’s not.” This ability to recognize patterns in nature applies equally well to spotting problem areas in a network represented as a forest of icons. “With the old-style 2D interface, we could show 200 icons at most. Trying to view a network with a 2D screen is like being in a forest with your head frozen in one place,” he says. “In 3D, we can show 10,000 icons.” Based on EAI’s WorldToolKit (from the Sense8 division in Sausalito, CA), nScope runs on Linux, Solaris, HP, SGI, and Windows NT machines and is priced at $10,000.

3D Metaphors

3D graphical interfaces based on objects from the real world are also being used to represent other types of data. At Argus, for example, tools created for virtual-reality environments are being applied to data-visualization problems. The company also uses WorldToolKit and has developed proprietary tools to create dynamic worlds that it plans to market later this year. In one application, the Children’s Health Network wanted to explore the relationship between asthma and environmental factors. Argus placed models of small buildings, schools, clinics, factories, and so forth onto a map of the town under examination. Discolored areas on the ground represent levels of contaminants, and at each clinic, icons representing groups of people show the number and seriousness of asthma cases.

Showing relationships within a different sort of database is a 3D metaphor created at CA for a stock brokerage firm. Divekar explains: “A stock broker would like to know in 30 seconds everything she can about a client.” Suppose, for example, the broker gets a phone call from a client. When she types in the client’s name, a snapshot of a 3D environment appears onscreen. Elements in the environment have been generated based on the client’s records so that by looking at the picture, the broker instantly knows a lot about the client. If the space looks like an office, the client is an institutional investor; if it’s a den, he’s a private investor. If he’s a private investor, the quality and type of furniture will indicate such things as his age and financial status. A pair of dice on the desk shows a level of risk-taking. The bookcase is filled with icons representing various types of investments. The broker can click on various elements to get to the actual financial data. CA has created similar metaphors for other types of sales organizations as well. “In the past, IT [information technology] has been about saving money. Now, it’s helping people make money,” Divekar says.

“Making environmental interfaces that sit on top of legacy software is a bold idea,” says Plaehn, “but it’s not a technology problem. The data exists. We don’t have to rewrite underlying applications. We just have to figure out new ways to package and present the information.”

It’s an idea that’s beginning to be picked up by studios that primarily create multimedia applications as well. VisionFactory (Apex, NC), a multimedia communications company that has specialized in product simulations and Hollywood-style business presentations, is beginning to create interactive 3D interfaces. “In the past, hardware support for playing real-time, interactive 3D hasn’t been available, but that’s changing,” says Daniel Lott, who heads the visualization center.

Indeed, Hine and his partners at Fourth Planet left NASA for precisely this reason. “What caused us to jump is that ordinary PCs can do the job now,” Hine says. “A Compaq with a Diamond FireGL can run our software.” When Vinberg built the first prototype of the Unicenter interface, it required a $30,000 machine to run. “Now it runs on my kid’s [PC] game machine,” he says.

The leading personal computer vendors are now shipping “3D-enabled” machines, and Intel is making a big push for 3D on the Internet as a marketing stratagem for its Pentium III processor. The marketing clout that Intel can muster is convincing several Internet developers to add 3D to Web sites, and the search engine Excite now boasts a 3D interface.

Spatial User Interfaces

If 3D interfaces such as these are successful, businesses will be more likely to demand 3D SUIs (spatial user interfaces) rather than 2D GUIs, and the whole face of business computing will change. That opens additional opportunities for such software companies as Advanced Visual Systems (AVS), EAI’s Sense8 division, SuperScape, Division, MultiGen, and others, which have already seen their tools beginning to be used for these applications. In addition, companies that offer authoring tools, 3D modeling and animation software, and, of course, 3D models will benefit. New types of tools such as Shell Interactive’s 3D Dreams and, more recently, Virtus’ OpenSpace3D for Director will become increasingly important. And artists and animators with imagination and general (rather than specialized) skills who can create these new 3D worlds will find new markets for their talent.

Graphics Types For The Layman

Vector graphics consist of objects that are defined by mathematical formulas and screen coordinates. These objects include lines, rectangles, ellipses, arcs, curves, and text. The placement, size, and color of each object are determined by its mathematical definition. Mechanical drafting and blueprints are good examples of vector-based graphics.

As mathematically based images, vector graphics can easily be transformed without affecting the original graphic quality. They can be enlarged and rotated, or even twisted into various shapes without losing any image quality. This feature is often referred to as resolution independence.
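The reason for that resolution independence is easy to see in code: a vector shape is just coordinates plus formulas, so enlarging or rotating it transforms exact numbers rather than resampling pixels. A minimal sketch:

```python
# A vector shape is stored as coordinates; scaling or rotating it just
# transforms the numbers. Nothing is resampled, so nothing degrades.

import math

def transform(points, scale=1.0, angle_deg=0.0):
    """Scale and rotate a list of (x, y) points about the origin."""
    a = math.radians(angle_deg)
    cos_a, sin_a = math.cos(a), math.sin(a)
    return [(scale * (x * cos_a - y * sin_a),
             scale * (x * sin_a + y * cos_a)) for x, y in points]

rectangle = [(0, 0), (4, 0), (4, 2), (0, 2)]
enlarged = transform(rectangle, scale=10.0)      # 10x larger, still exact
rotated = transform(rectangle, angle_deg=90.0)   # rotated, still exact
print(enlarged[2])  # (40.0, 20.0): the same rectangle, just bigger
```

A bitmap enlarged tenfold would have to invent nine new pixels for every original one; the vector version simply recomputes its outline at the new size.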

Raster-based graphics are composed of pixels that are arranged in columns and rows. Each pixel represents either black or white, a shade of gray or color, depending on the number of colors the image file format is capable of representing.


Bitmap graphics are defined by the number of colors they are capable of representing and their size in pixels. An 800×600 24-bit color bitmap, for example, is 800 pixels wide, 600 pixels high, and is capable of representing 16.7 million different colors.
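Those figures can be checked with a little arithmetic: 24 bits split into 8 bits each for red, green, and blue gives 2^24 possible colors, and one 24-bit value per pixel gives the raw storage cost.

```python
# Checking the figures in the text: a 24-bit image devotes 8 bits to
# each of red, green, and blue, and an 800x600 bitmap stores one such
# value per pixel.

width, height, bits_per_pixel = 800, 600, 24

colors = 2 ** bits_per_pixel                # "16.7 million" colors
raw_bytes = width * height * bits_per_pixel // 8

print(colors)      # 16777216
print(raw_bytes)   # 1440000 bytes, about 1.4 MB uncompressed
```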

Unlike vector graphics, bitmap graphics are not resolution independent. Image quality is directly related to the number of pixels in each bitmap. In other words, the more pixels there are, the higher the quality. Also unlike vector graphics, bitmap images cannot be scaled up or down in size without a significant loss in image quality.

Today’s hybrid applications bridge the incompatibility between the two graphics types by treating objects and text as vector-based until they are merged with the underlying bitmap. When an image is created, the application generates an underlying bitmap with the exact dimensions needed for final output. The bitmap can be filled with a photograph, a color gradient or a solid color.

Once the underlying bitmap has been created, vector-based objects and text are created, resized and arranged on top of it. The vector-based objects remain editable until they’re converted to pixels and merged with the underlying bitmap, which is referred to as rendering the image. The vector-based objects are turned into bitmaps and take on the resolution of the underlying image.
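The rendering step described above, in which vector objects stay editable until they are merged into pixels, can be sketched with a toy rasterizer; the rectangle object and canvas size here are invented.

```python
# Toy version of the "rendering" step: a vector object stays editable
# as coordinates until it is rasterized into the underlying bitmap,
# after which it is just pixels.

def make_bitmap(width, height, fill=0):
    return [[fill] * width for _ in range(height)]

def rasterize_rect(bitmap, x, y, w, h, value=1):
    """Merge a vector rectangle into the bitmap (the point of no return)."""
    for row in range(y, y + h):
        for col in range(x, x + w):
            bitmap[row][col] = value

canvas = make_bitmap(8, 4)                 # underlying bitmap, final size
rect = {"x": 1, "y": 1, "w": 3, "h": 2}    # editable vector object
rect["w"] = 5                              # still editable: resize freely

rasterize_rect(canvas, rect["x"], rect["y"], rect["w"], rect["h"])
print(sum(sum(row) for row in canvas))  # 10 pixels now carry the shape
```

Before the `rasterize_rect` call, resizing the rectangle is a one-number edit; afterward, the shape exists only as pixels at the bitmap's resolution.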

In PhotoImpact 4.0, for example, text is treated as vector-based and can be distorted, moved around the screen, and resized. Text can also be treated as a three-dimensional object with sophisticated lighting effects and surface textures. The PhotoImpact image, for instance, was created as a two-dimensional object, converted into 3D, painted with a gradient color, and rounded into a tube shape. The drop shadow was added for effect.


Several applications take this procedure a little further. Instead of requiring that an underlying bitmap image be created at the outset, these applications let users work entirely with resolution-independent objects and text until the image is ready for output. Only then are the text and objects rendered into a bitmap and sent to the printer. The Star Services image, for instance, was created in Satori PhotoXL 2.5 using resolution-independent, vector-based objects and text. The vector-based graphics were generated on independent layers that remained editable until the final output.

Satori PhotoXL 2.5 and Ron Scott’s QFX both provide texture-mapping utilities that integrate the use of bitmap images onto resolution-independent, vector-based objects. They do this by painting the images onto the object surfaces. Once the image is in place, the object can be resized and distorted into any shape.

Hybrid image editing applications also save lots of time by providing an integrated environment in which both bitmap and vector-based image elements can be edited at the same time. Confining editing to one application saves time and reduces confusion. It avoids the time-consuming task of swapping files between applications.

These hybrid programs make it possible to work with very large files that have multiple layers and numerous objects, in ways that wouldn’t be possible in traditional image editing programs. In Satori PhotoXL 2.5, for example, it’s possible to use brushes as large as 3000 pixels wide in real time, an approach that would slow conventional image applications to a crawl.

Hybrid image-editing applications also have extensive undo capabilities. In fact, several let users selectively undo one or more editing actions from an ongoing list without adversely affecting the rest of the image. These applications provide a rich environment in which both bitmap and vector-based images can successfully be integrated without compromising the quality or resolution of either.

Digital Video Editing For Dummies?

I dread the day my mother buys a PC-based video editing system. After all, looking back, we almost didn’t make it through AOL’s mail program (“No, Mom, just because I use email doesn’t mean I know how to use every email program in the universe.”), or even the Internet itself (“No, Mom, the Internet doesn’t ship with a manual.”). I can’t even begin to imagine a politically correct way to suggest to my mother that she purchase Digital Video for Dummies.

Having just reviewed for another magazine four consumer-oriented video editing programs–Avid Cinema, MGI VideoWave, Pinnacle Systems’ Studio MP10, and Ulead’s VideoStudio–I sadly must conclude that digital video is still beyond most consumers. Not because the ideal interface for storyboarding, capturing, editing, and encoding hasn’t been invented yet, but because no one developer has molded these ideal interfaces into one product.

Which is a great opportunity for all four companies if they’re willing to utilize Jan Ozer’s theory of software development: to wit, “Steal all of your competitors’ best ideas unabashedly unless they’re patented or copyrighted.” Okay, Bill Gates thought of it first, but I’ve used it to good effect many times.

Building the perfect beast

Without question, if I were building a video editor for Mom, I would start with Avid Cinema, the only editor of the bunch that figured out that I don’t want to be explaining things like codecs, resolutions, and frame rates to Mom. Instead, she can choose between high- or low-quality capture, output for CD-ROM or the Internet, and let Cinema make all the decisions. Not a codec or 320 x 240 to be found.

Even better, Avid includes canned storyboards for events like birthdays, customer testimonials, and school projects for those of us who weren’t born with Steven Spielberg’s gifts for cinematic storytelling. If you can connect the dots, you can create a video with both technical and artistic merit.

In use, however, Cinema has some frustrating limitations. For example, there are no time codes or frame numbers, which complicates precise edits–an often crucial requirement, even for consumers. In addition, under the hood, Cinema makes some questionable decisions, like choosing the Cinepak codec for CD-ROM distribution or capturing at 720 x 480 when distributing over the Internet.

This means the video isn’t as good as it could be, and believe me, Mom won’t stand for that, even if the program is brain-dead easy to use. So I’d step outside of the video-editing arena and steal some ideas from RealNetworks’ RealPublisher, which uses a series of non-technical queries to configure your video project.

I’d start by asking Mom a series of questions like “What computers do you want to play these videos back on (processor speed)?” and “How do you plan to get the videos to your viewers (Internet, intranet, or CD-ROM)?” Then I would go beyond Cinema and customize all capture, editing, and output decisions based on her answers.

The features I’d want if I had a say

I’d also expand Cinema’s feature set, perhaps taking a page from MGI’s VideoWave II, which is extremely feature-rich, with tons of presets and canned controls that make each feature very easy to apply. For example, VideoWave makes it simple to overlay a graphic over your video (which Cinema can’t do) and can vary special effects over time–another great feature.

But MGI is the only editor that doesn’t offer a timeline, relying solely upon a storyboard interface that complicates effects lasting longer than one clip, like background audio tracks or overlaying your company logo over the entire production. That’s why we liked VideoStudio’s interface, which offers both storyboard and timeline views.

In addition, VideoStudio can theoretically build movies of unlimited length, where MGI is limited to about ten minutes–certainly insufficient if Mom tries to glom all those 16mm films into “How My Bratty Son Ruined My Life” or some equally appropriate movie. I say “theoretically” because all editors work best when coupled with a known capture card, as in a bundling arrangement, but that hinders retail sales.

That’s because Microsoft hasn’t cleaned up and enforced its low-level DirectShow interface so that all boards and software are both highly functional and interoperable. So to make Mom happy–and keep me sane–Microsoft’s also gonna have to clean up its act. Alternatively, the development community should switch over to QuickTime 4.0 on the Windows platform, which offers superior compatibility.

Finally, Mom’s gonna need a parallel port solution like Pinnacle’s Studio MP10, because I’m leaving the country if she ever tries to install an internal capture card. Otherwise, she should purchase a computer with integrated FireWire, like Sony’s VAIO, which offers incredible value for anyone interested in home or prosumer video editing.

Overall, at this point, Mom’s best choice is Avid Cinema, but only if she buys it bundled with a compatible capture card like Matrox’s Marvel G-200. Otherwise, I’d tell her to stick to knitting.

No one knows whether consumers will ever embrace digital video to get more from the endless tapes we shoot of friends and family. But we do know that consumers won’t wade through a morass of new jargon to create their videos, and certainly won’t relinquish functionality for ease of use.

Web Animation – Better Than Ever!

“The key drivers of the Web animation market are the continued growth of the Internet, the enhanced capabilities of CPUs, and the high expectations of e-commerce applications,” says Norvin Leong, an analyst with Frost & Sullivan, which recently released a report on the use of animation on the Web. “Conversely, major challenges manifest themselves in the slow deployment of high-bandwidth pipes, steep learning curves, and the dearth of standards.”


Another challenge lies in getting more consumers to spend money over the Internet. Despite total 1998 on-line sales exceeding $8 billion, many people are still hesitant to purchase items on the Web.

Although past attempts at popularizing 3-D in the consumer market have failed, its benefits for e-commerce may help to bolster the use of animation on the ‘Net, where photorealistic 3-D models would enable potential buyers to more closely examine a product before making a purchasing decision.

Furniture manufacturer Herman Miller, Inc. (Zeeland, Mich.) employs animation on its Web site, which features a room-planning mechanism. A compilation of electronic space-arranging tools created in Macromedia Director, the Room Planner enables browsers to create a custom-made virtual room that is formatted to the dimensions of their own home.

Visitors can decorate with 154 different pieces of Herman Miller furniture, various windows and doors, and even pets scaled to size.


According to Ray Kennedy, general manager of the on-line store, the Room Planner represents one of the first instances in which Shockwave, Macromedia’s streaming file format, has been used for business purposes rather than entertainment such as online games.

“The Room Planner makes designing and furnishing a room in your home office simple, fun, and fast,” Kennedy states. “The key premise of the program is to make buying easier for our customers.”

According to Kennedy, the combination of Director and Flash assets keeps the file size of the Room Planner small: even with more than 100 items and many sound rollover effects, it takes up only 430 kilobytes of disk space. Flash assets also allow users to rotate and scale items while retaining a smooth look to the graphics.

While the Herman Miller site today relies largely on 2-D animation, 3-D graphics are likely in its future.

“At this point, bandwidth is the biggest problem,” explains Cort Langworthy, executive director of new media for Big Theory, the Dallas-based digital design agency that created the furniture manufacturer’s site. “But I think 3-D animation will allow users to get a much better idea of what the physical object looks like. As the pipeline opens up and machines get faster, we’ll be able to create a more sophisticated and enticing experience for users.”