Saturday, December 31, 2005

Dictionary.com/single-mindedness: "sin·gle-mind·ed (sĭng′gəl-mĭn′dĭd)
adj.

1. Having one overriding purpose or goal: the single-minded pursuit of money.
2. Steadfast; resolute: He was single-minded in his determination to stop smoking."

MacDailyNews - Apple and Mac News - Welcome Home: "Now for the ultimate: Hold down Command-Control-D as you move your cursor over multiple words and watch what happens! (Bonus secret: you can let go of the D key and as long as you continue holding Command-Control, it'll keep working.)

It works in text input boxes, too. So, no more misused, misspelled words for Tiger Safari users in your Reader Feedback comments, okay?"

Miguel de Icaza: "Although raw performance is good for some applications, it imposes a heavy toll on the developer. A professionally written and hand-tuned assembly language program will likely perform better than anything else generated by a compiler.

The language and runtime choice is a tradeoff that developers make. A balance between the time available for releasing the product; the budget available for creating and maintaining the application; the target system requirements; any third party libraries and components required; the in-house expertise; availability of developers with knowledge to develop and maintain the code; language learnability; the project life-span and the requirements that it might impose on the project: from languages designed to maintain software over a large period of time to write-once, barely-touch-afterwards software."

.NET General

interview with Miguel de Icaza

Dare Obasanjo: A number of parties have claimed that the Microsoft .NET platform is a poor clone of the Java™ platform. If this is the case, why hasn't Ximian decided to clone or use the Java platform instead of cloning the Microsoft .NET platform?

Miguel de Icaza: We were interested in the CLR because it solves a problem that we face every day. The Java VM did not solve this problem.

Mac OS X running on my desktop pc

Mac OS X version 10.4.3, running on my AMD64 3000+ on an MSI ATI RS480 mobo

also installed Microsoft Office 2004 for Mac. tried the macros and VBA (Visual Basic for Applications); it's the same as its Windows cousin. VBA is all the same, except there's just no IntelliSense during coding.

Universal Serial Bus - Wikipedia, the free encyclopedia: "Transfer speed

USB supports three data rates.

* A Low Speed rate of 1.5 Mbit/s (183 KiB/s) that is mostly used for Human Interface Devices (HID) such as keyboards, mice and joysticks.

* A Full Speed rate of 12 Mbit/s (1.4 MiB/s). Full Speed was the fastest rate before the USB 2.0 specification and many devices fall back to Full Speed. Full Speed devices divide the USB bandwidth between them on a first-come, first-served basis and it is not uncommon to run out of bandwidth with several isochronous devices. All USB Hubs support Full Speed.

* A Hi-Speed rate of 480 Mbit/s (57 MiB/s). (Commonly called USB 2.0)"
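A quick sanity check on those parenthetical conversions (my own arithmetic, not Wikipedia's): 480 Mbit/s ÷ 8 = 60,000,000 bytes/s, and 60,000,000 ÷ 1,048,576 ≈ 57.2, hence "57 MiB/s." Likewise 12 Mbit/s works out to 1.5 MB/s ≈ 1.4 MiB/s, and 1.5 Mbit/s to 187.5 kB/s ≈ 183 KiB/s.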

Universal Serial Bus - Wikipedia, the free encyclopedia: "On Microsoft Windows platforms, one can tell whether a USB port is version 2.0 by opening the Device Manager and checking for the word 'Enhanced' in its description; only USB 2.0 drivers will contain the word 'Enhanced.' On Linux systems, the lspci -v command will list all PCI devices, and the controllers will be named OHCI, UHCI or EHCI respectively, which is also the case in the Mac OS X system profiler. On BSD systems, dmesg will show the detailed information hierarchy."


How Google Grows...and Grows...and Grows


... Its performance is the envy of executives and engineers around the world ... For techno-evangelists, Google is a marvel of Web brilliance ... For Wall Street, it may be the IPO that changes everything (again) ... But Google is also a case study in savvy management -- a company filled with cutting-edge ideas, rigorous accountability, and relentless attention to detail ... Here's a search for the growth secrets of one of the world's most exciting young companies -- a company from which every company can learn.

On Tuesday morning, January 21, the world awoke to nine new words on the home page of Google Inc., purveyor of the most popular search engine on the Web: "New! Take your search further. Take a Google Tour." The pitch, linked to a demo of the site's often overlooked tools and services, stayed up for 14 days and then disappeared.

To most reasonable people, the fleeting house ad seemed inconsequential. But imagine that you're unreasonable. For a moment, try to think like a Google engineer -- which pretty much requires being both insanely passionate about delivering the best search results and obsessive about how you do that.

If you're a Google engineer, you know that those nine words comprised about 120 bytes of data, enough to slow download time for users with modems by 20 to 50 milliseconds. You can estimate the stress that 120 bytes, times millions of searches per minute, put on Google's 10,000 servers. On the other hand, you can also measure precisely how many visitors took the tour, how many of those downloaded the Google Toolbar, and how many clicked through for the first time to Google News.
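A back-of-the-envelope check of that claim (mine, not the article's): 120 bytes is 960 bits, and 960 bits ÷ 28,800 bit/s ≈ 33 ms, so the 20-to-50-millisecond range fits the dial-up modems of the day (a 56k line takes about 17 ms, a 28.8k line about 33 ms).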

This is what it's like inside Google. It is a joint founded by geeks and run by geeks. It is a collection of 650 really smart people who are almost frighteningly single-minded. "These are people who think they are creating something that's the best in the world," says Peter Norvig, a Google engineering director. "And that product is changing people's lives."

Geeks are different from the rest of us, so it's no surprise that they've created a different sort of company. Google is, in fact, their dream house. It also happens to be among the best-run companies in the technology sector. At a moment when much of business has resigned itself to the pursuit of sameness and safety, Google proposes an almost joyous antidote to mediocrity, a model for smart innovation in challenging times.

Google's tale is a familiar one: Two Stanford doctoral students, Sergey Brin and Larry Page, developed a set of algorithms that in 1998 sparked a holy-shit leap in Web-search performance. Basically, they turned search into a popularity contest. In addition to gauging a phrase's appearance on a Web page, as other engines did, it assessed relevance by counting the number and importance of other pages that linked to that page.

Since then, newer search products such as Teoma and Fast have essentially matched Google's advance. But Google remains the undisputed search heavyweight. Google says it processes more than 150 million searches a day -- and the true number is probably much higher than that. Google's revenue model is notoriously tough to deconstruct: Analysts guess that its revenue last year was anywhere from $60 million to $300 million. But they also guess that Google made quite a bit of money.

As a result, there is constant, hopeful speculation among financiers around an initial public offering, a deal that could be this decade's equivalent of the 1995 Netscape IPO. A few years back, such a deal might have valued Google at $3 billion or more. Even today, a Google offering might fetch $1 billion.

For now, though, most of the cars in the lot outside Google's modest offices in a Mountain View, California office park are beat-up Volvos and Subarus, not Porsches. And while Googlers may relish their shot at impossible wealth, they appear driven more by the quest for impossible perfection. They want to build something that searches every bit of information on the Web. More important, they want to deliver exactly what the user is looking for, every time. They know that this won't ever happen, and yet they keep at it. They also pursue a seemingly gratuitous quest for speed: Four years ago, the average search took approximately 3 seconds. Now it's down to about 0.2 seconds. And since 0.2 is more than zero, it's not quite fast enough.

Google understands that its two most important assets are the attention and trust of its users. If it takes too long to deliver results or an additional word of text on the home page is too distracting, Google risks losing people's attention. If the search results are lousy, or if they are compromised by advertising, it risks losing people's trust. Attention and trust are sacrosanct.

Google also understands the capacity of the Web to leverage expertise. Its product-engineering effort is more like an ongoing, all-hands discussion. The site features about 10 technologies in development, many of which may never be products per se. They are there because Google wants to see how people react. It wants feedback and ideas. Having people in on the game who know a lot of stuff tells you earlier whether good ideas are good ideas that will actually work.

But what is most striking about Google is its internal consistency. It is a beautifully considered machine, each piece seemingly true to all the rest. The appearance of advertising on a page, for example, follows the same rules that dictate search results or even new-product innovation. Those rules are simple, governed by supply, demand, and democracy -- which is more or less the logic of the Internet too.

Like its search engine, Google is a company overbuilt to be stronger than it has to be. Its extravagance of talent allows it crucial flexibility -- the ability to experiment, to try many things at once. "Flexibility is expensive," says Craig Silverstein, a 30-year-old engineer who dropped his pursuit of a Stanford PhD to become Google's first employee. "But we think that flexibility gives you a better product. Are we right? I think we're right. More important, that's the sort of company I want to work for."

And the sort of company that every company can learn from. What follows, then, is our effort to "google" Google: to search for the growth secrets of one of the world's most exciting growth companies. Like the logic of the search-engine itself, our search was deep and democratic. We didn't focus on Google's big three: CEO Eric Schmidt and founders Brin and Page. Instead, we went into the ranks and talked with the project managers and engineers who make Google tick. Here's what we learned.
Rule Number One: The User Is in Charge

"There are people searching the Web for 'spiritual enlightenment.' " Peter Norvig says this with such utter solemnity that it's impossible to tell for sure whether he gets the irony. Then again, Norvig is the guy who authored a hilarious PowerPoint translation of Lincoln's Gettysburg Address (available at www.norvig.com), a geek classic. So maybe he's having fun.

But he's also making a point. When someone enters a query on Google for "spiritual enlightenment," it's not clear what he's seeking. The concept of spiritual enlightenment means something different from what the two words mean individually. Google has to navigate varying levels of literality to guess at what the user really wants.

This is where Googlers live, amid semantic, visual, and technical esoterica. Norvig is Google's director of search quality, charged with continuously improving people's search results. Google tracks the outcome of a huge sample of the queries that we throw at it. What percentage of users click on the first result that Google delivers? How many users click on something from the first page? Norvig's team members scour the data, looking for trouble spots. Then they tweak the engine.

The cardinal rule at Google is, If you can do something that will improve the user's experience, do it. It is a mandate in part born of paranoia: There's always a chance that the Google destroyer is being pieced together by two more guys in a garage. By some estimates, Google accounts for three-quarters of all Web searches. But because it's not perfect, being dominant isn't good enough. And the maniacal attack on imperfection reflects a genuine belief in the primacy of the customer.

That's why Google must correctly interpret searches by Turks and Finns, whose queries resemble complete sentences, and in Japanese, where words run together without spaces. It has to understand not only the meanings of individual words but also the relationships of those words to other words and the characteristics of those words as objects on a Web page. (A page that displays a search word in boldface or in the upper-right-hand corner, for example, will likely rank higher than a page with the same words displayed less prominently.)

It's why the difference between 0.3 seconds and 0.2 seconds is pretty profound. Most searches on Google actually take less than 0.2 seconds. That extra tenth of a second is all about the outliers: queries crammed with unrelated words or with words that are close in meaning. The outliers can take half a second to resolve -- and Google believes that users' productivity begins to wane after 0.2 seconds. So its engineers find ways to store ever-more-arcane Web-text snippets on its servers, saving the engine the time it takes to seek out phrases when a query is made.

And it's why, most of the time, the Google home page contains exactly 37 words. "We count bytes," says Google Fellow Urs Holzle, who is on leave from the University of California at Santa Barbara. "We count them because our users have modems, so it costs them to download our pages."

Just as important, every new word, button, or feature amounts to an assault on the user's attention. "We still have only one product," Holzle says. "That's search. People come to Google to search the Web, and the main purpose of the page is to make sure that you're not distracted from that search. We don't show people things that they aren't interested in, because in the long term, that will kill your business."

Google doesn't market itself in the traditional sense. Instead, it observes, and it listens. It obsesses over search-traffic figures, and it reads its email. In fact, 10 full-time employees do nothing but read emails from users, distributing them to the appropriate colleagues or responding to them themselves. "Nearly everyone has access to user feedback," says Monika Henzinger, Google's director of research. "We all know what the problem areas are, where users are complaining."

The upshot is that Google enjoys a unique understanding of its users -- and a unique loyalty. It has managed a remarkable feat: appealing to tech-savvy Web addicts without alienating neophytes who type in "amazon.com" to find . . . Amazon.com. (Yes, people really do that. Google doesn't know why.)

"Google knows how to make geeks feel good about being geeks," says Cory Doctorow, prominent geek, blogger, and technology propagandist. Google has done that from the beginning, when Brin and Page basically laid open their stunning new technology in a 1998 conference paper. They invited in the geeks in and made them feel as if they were in on something special.

But they didn't forget to make everyone else feel special too. They still do, by focusing relentlessly on the quality of the experience. Make it easy. Make it fast. Make it work. And attack everything that gets in the way of perfection.
Rule Number Two: The World Is Your R&D Lab

Paul Bausch is a 29-year-old Web developer in Corvallis, Oregon. He works with ASP, SQL Server, Visual Basic, XML, and a host of other geek-only technologies. He helped create Blogger, a widely used program that helps people set up their own Web log. And in a way that's intentionally imprecise, he's part of Google's research effort.

"Isn't this great?" exclaims Nelson Minar, a senior Google engineer. Minar and I are fooling with Bausch's quirky creation called Google Smackdown, where you can compare the volume of Google citations for any two competing queries. (The New York Yankees slam the New York Mets; war conquers peace.) Google loosed Smackdown and other eccentric Web novelties when it released a developer's kit last spring that lets anyone integrate Google's search engine into their own application. The download is simple, and the license is free for the taking.

Here's the scary bit: Basically, those developers can do whatever they want. The only control that Google exerts is a cap of 1,000 queries per day per license to guard against an onslaught that might bring down its servers. In most cases, Minar and his colleagues have no idea how people use the code. "It's kind of frustrating," he concedes. "We would love to see what they're doing."

Most companies would sooner let temps into the executive washroom than let customers -- much less customers who can hack -- anywhere near their core intellectual property. Google, though, grasps the power of an engaged community. The developer's kit is a classic Trojan-horse strategy, putting Google's engine in places that the company might not have imagined. More important, Bausch says, opening up the technology kimono "turns the world into Google's development team."

Sites like Smackdown, while basically toys, "are an inkling of what Google could be used for," Minar says. "We can't predict what will happen. But we can predict that there will be an effect on our technology and on the way the world views us." And more likely than not, it will be something pretty cool.
Rule Number Three: Failures Are Good. Good Failures Are Better.

In Google Labs, just two clicks away from its home page, anyone can test-drive Google Viewer, sort of a motion-picture version of your search results, or Voice Search, a tool that lets you phone in a query and then see your results online. Is either ready for prime time? Not really. (Try them out. On Voice Search, you're as likely to get someone else's results as your own.)

But that's the point. The Labs reflect a shared ethos between Google and its users that allows for public experimentation -- and for failure. People understand that not everything Google puts on view will work perfectly. They also understand that they are part of the process: They are free to tell Google what's great, what's not, and what might work better.

"Unlike most other companies," observes Matthew Berk, a senior analyst at Jupiter Research, Google has said, 'We're going to try things, and some aren't going to work. That's okay. If it doesn't work, we'll move on.' "

In the search business, failure is inevitable. It comes with the territory. A Web search, even Google's, doesn't always give you exactly what you want. It is imperfect, and that imperfection both allows and requires failure. Failure is good.

But good failures are even better. Good failures have two defining characteristics. First, says Urs Holzle, "you know why you failed, and you have something you can apply to the next project." When Google experimented with thumbnail pictures of actual Web pages next to results, it saw the effect that graphical images had on download times. That's one reason why there are so few images anywhere on Google, even in ads.

But good failures also are fast. "Fail," Holzle says. "But fail early." Fail before you invest more than you have to or before you needlessly compromise your brand with a shoddy product.
Rule Number Four: Great People Can Manage Themselves

Google spends more time on hiring than on anything else. It knows this because, like any bunch of obsessive engineers, it keeps track. It says that it gets 1,500 résumés a day from wanna-be Googlers. Between screening, interviewing, and assessing, it invested 87 Google people-hours in each of the 300 or so people that it hired in 2002.

Google hires two sorts of engineers, both aimed at encouraging the art of fast failure. First, it looks for young risk takers. "We look for smart," says Wayne Rosing, who heads Google's engineering ranks. "Smart as in, do they do something weird outside of work, something off the beaten path? That translates into people who have no fear of trying difficult projects and going outside the bounds of what they know."

But Google also hires stars, PhDs from top computer-science programs and research labs. "It has continually managed to hire 90% of the best search-engine people in the world," says Brian Davison, a Lehigh University assistant professor and a top search expert himself. The PhDs are Google's id. They are the people who know enough to shoot holes in ideas before they go too far -- to make the failures happen faster.

The challenge is negotiating the tension between risk and caution. When Rosing started at Google in 2001, "we had management in engineering. And the structure was tending to tell people, No, you can't do that." So Google got rid of the managers. Now most engineers work in teams of three, with project leadership rotating among team members. If something isn't right, even if it's in a product that has already gone public, teams fix it without asking anyone.

"For a while," Rosing says, "I had 160 direct reports. No managers. It worked because the teams knew what they had to do. That set a cultural bit in people's heads: You are the boss. Don't wait to take the hill. Don't wait to be managed."

And if you fail, fine. On to the next idea. "There's faith here in the ability of smart, well-motivated people to do the right thing," Rosing says. "Anything that gets in the way of that is evil."
Rule Number Five: If Users Come, So Will the Money

Google has no strategic-planning department. CEO Eric Schmidt hasn't decreed which technologies his engineers should dabble in or which products they must deliver. Innovation at Google is as democratic as the search technology itself. The more popular an idea, the more traction it wins, and the better its chances.

Here's how one Google service came into the world. In December 2001, researcher Krishna Bharat posted an internal email inviting Googlers to check out his first crack at a dynamic news service. Although Google offered a basic headline service at the time, news was not a corporate mandate. This was simply Bharat's idea. As a respected PhD hired away from Compaq and a member of the company's 10-person research lab, coming up with new ideas is basically Bharat's job.

For an early prototype, it was quite a piece of work. Bharat had built an engine that crawled 20 news sources once an hour, automatically delivering the most recent stories on in-demand topics -- something like a virtual wire editor. And within Google, it got a lot of attention. Importantly, it attracted the attention of Marissa Mayer, a young engineer turned project manager.

Mayer connected Bharat with an engineering team. And within a month and a half, Google had posted on its public site a beefed-up version of the text-based demo, which is now called Google News and which features 155 sources and a search function. Within three weeks of going public, the service was getting 70,000 users a day.

One reason Google puts its innovations on public display is to identify failures quickly. Another reason is to find winners. For Bharat and Mayer, those 70,000 users provided ammunition to build a case for News within Google. "A public trial helps you go fast," Mayer says. "If it works, it builds internal passion and fervor. It gets people thinking about the problem."

Soon, Mayer had marshaled a handful of engineers to bulk up News. They expanded its reach to more than 4,000 sources, updated continuously instead of hourly. They created an engine that was robust enough to support five times the anticipated early volume. And they prettied it up, designing an interface that displayed hundreds of headlines and photos but that was still easy to navigate. By September, the new News was up.

Is Google News an actual product? Not exactly. Its home page is still labeled Beta, as are all but a few of Google's offerings. It may become a Google fixture, it may disappear, or it may recede into Google Labs. Mayer is still studying the traffic, and the engineers are still tweaking, reacting to users' emails.

The company's organic approach to invention bugs some onlookers. "Google is a great innovator," says Danny Sullivan, editor of Search Engine Watch and an influential commentator. "They keep rolling out great things. But Google News was an engineer deciding he wanted a news engine. Now Google has this product, and it doesn't know how to make money off of it."

Sullivan is onto something important: At some point, all of this great stuff has to turn a profit. That was the one great moral of the dotcom blowout: "Monetizing eyeballs" turned out to mean "throwing money down a sinkhole." When Mayer argues that "the traffic will let us know" whether News is a success, she's echoing a long line of now-unemployed executives who thought that they had tamed the business cycle.

But at Google, building and then following the traffic makes perfect sense. It's central to the company's culture and its operating logic. Consider this: For the first 18 months of its existence, Google didn't make a penny from its basic Web-search service. Only then did it make the transition from great technology to great technology with a critical mass of users.

And Google was able to package that traffic in ways that seem both ingenious and completely synchronous. The search service itself remained free. But Google has, for example, sold untold numbers of ads pegged to specific search keywords. (Not surprisingly, Fast Company slips in a paid ad to the side of your results whenever your query includes fast company.)

Advertisers don't just pay a set rate, or even a cost per thousand viewers. They bid on the search term. The more an advertiser is willing to pay, the higher its ad will be positioned. But if the ad doesn't get clicks, its rank will decline over time, regardless of how much has been bid. If an ad is persistently irrelevant, Google will remove it: It's not working for the advertiser, it's not serving users, and it's taking up server capacity.
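A toy illustration of that bid-times-clickthrough idea (my numbers, not Google's actual formula): advertiser A bids $1.00 per click but earns only a 1% clickthrough rate, for about 1¢ of expected revenue per impression; advertiser B bids just $0.40 but earns a 5% clickthrough, for 2¢ per impression. Over time, B's ad can outrank A's despite the lower bid.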

This is how it is at Google. Google News attracted eyeballs among Bharat's employees, so it made the leap to the public domain. If enough users like it, it will have real power with advertisers. And traffic for advertisers will beget even more traffic for advertisers.

So yes, Mayer has a revenue strategy. She's had one since January 2002, before the first version of News went public. She won't say what it is, but if News can build enough traffic, Google almost surely will seek advertising. It will probably resell the service to portals and other commercial sites, just as it does with its core Web search. (Every time you see the Google logo on a corporate site, the company is likely paying at least $25,000 a year for a Google server.) "But we're not in a hurry," Mayer says. "We're focused on making News a great experience. Until we figure out whether the product has traction, there's no rush to execute the revenue plan."

Could it be any simpler? Build great products, and see if people use them. If they do, then you have created value. And if you've truly done that, then you have a business. Says Mayer: "Our motto here is, There's no such thing as success-failure on the Net." In other words, if users win, then Google wins. Long live democracy.
Sidebar: Just how big is Google?

That's hard to say. Officially, Google says that it processes more than 150 million searches a day, but the true number is probably much higher. According to Nielsen/NetRatings, 67.6 million people worldwide visited Google an average of 6.2 times last December. Analysts guess that last year's revenue was between $60 million and $300 million.
Sidebar: A Gaggle of Google Games

While tens of millions of people like Google, a disconcertingly large minority are obsessed with it. Since 1999, techies have invested many hours and much creativity into devising a wide range of Google-based parlor games and curiosities. Here's a sampling, courtesy of Google and Cameron Marlow at MIT's Media Lab.

Googlewhack Find two words which, when combined in a Google query, deliver one and only one result. www.googlewhack.com claims that it has recorded 120,000 whacks since January 2002. Among recent entries to its "Whack Stack" are prevarication pileups and hiccupping flubber. (A Fast Company original: defamatory meerkats.)

Googlebomb Geek terrorism. Taking advantage of a Google loophole, Googlebombers gang up to mass-hyperlink a target page with a specific (usually derogatory) phrase. Google picks up on the links, even if the phrase isn't on the page itself. The legendary first, incited by Adam Mathes in April 2001, tagged Mathes's friend Andy Pressman's site with the words "talentless hack." For a while, it stuck.

Googleshare The invention of blogger Steven Berlin Johnson. Search Google for one word. Then search those results for the name of a person. Divide the number of results delivered for your second search by those from the first to get that person's "semantic mindshare" of the word.
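A made-up worked example: if "blogging" alone returned 1,000,000 results and "blogging" plus a particular author's name returned 20,000, that author's googleshare of "blogging" would be 20,000 ÷ 1,000,000 = 2%.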

Googlism Type in your name, someone else's name, or a date, place, or thing at www.googlism.com. The application, written by a team at Domain Active in Australia, uses Google to deliver Web-based definitions of your phrase. Bill Gates, for example, is "the anti-Christ," "a thief," "a hero," and "a wanker."

Google Smackdown Two queries. One search engine. A "terabyte tug-of-war," as its creator, Paul Bausch, calls it. Just plug in two competing words or phrases at www.onfocus.com/googlesmack/down.asp, and see which delivers more Google results. (Google, with 17.5 million, suffers a rare embarrassment at the hands of God, with 42.6 million.)
Sidebar: How does Google keep innovating?

One big factor is the company's willingness to fail. Google engineers are free to experiment with new features and new services and free to do so in public. The company frequently posts early versions of new features on the site and waits for its users to react. "We can't predict exactly what will happen," says senior engineer Nelson Minar.

viksoe.dk - GMail Drive shell extension:

GMail Drive shell extension

GMail Drive is a Shell Namespace Extension that creates a virtual filesystem around your Google Gmail account, allowing you to use Gmail as a storage medium.

GMail Drive creates a virtual filesystem on top of your Google Gmail account and enables you to save and retrieve files stored on your Gmail account directly from inside Windows Explorer. GMail Drive literally adds a new drive to your computer under the My Computer folder, where you can create new folders, copy and drag'n'drop files to.

Ever since Google started to offer users a Gmail e-mail account, which includes storage space of 2000 megabytes, you have had plenty of storage space but not a lot to fill it up with. With GMail Drive you can easily copy files to your Gmail account and retrieve them again.
When you create a new file using GMail Drive, it generates an e-mail and posts it to your account. The e-mail appears in your normal Inbox folder, and the file is attached as an e-mail attachment. GMail Drive periodically checks your mail account (using the Gmail search function) to see if new files have arrived and to rebuild the directory structures. But basically GMail Drive acts as any other hard-drive installed on your computer.
You can copy files to and from the GMail Drive folder simply by using drag'n'drop like you're used to with the normal Explorer folders.

Because the Gmail files will clutter up your Inbox folder, you may wish to create a filter in Gmail to automatically move the files (prefixed with the GMAILFS letters in the subject) to your archived mail folder.

Please note that GMail Drive is still an experimental tool. There are still a number of limitations in the file system (such as total filename size must be less than 40 characters). Since the tool hooks up with the free Gmail Service provided by Google, changes in the Gmail system may break the tool's ability to function. I cannot guarantee that files stored in this manner will be accessible in the future.

14 Jan. update: While Google keeps improving their Gmail service, they also tend to break the tool's ability to connect to Gmail. Version 1.0.5 was released to overcome the latest changes. Please be aware that support for this tool may suspend at any time if Google decides to block its use.
17 Sep update: Google restructured the Gmail login procedures again and previous versions of the tool fail to log in. The new version also adds the ability to double-click to launch files and FileOpen dialog support.
4 Dec update: Several improvements: Security warning on unsafe files, better XP-look / drag-images, new graphics by Jay Hilwig, better error reporting, Win64 bit support, fewer refreshes.

Installation Requirements

Internet Explorer 5 or better

Installation Guide

  • Extract the ZIP file to a temporary folder.
  • Run the Setup application.

Kirkville - Why Use the Command Line in Mac OS X?: "I appreciate your reasons and would like to add one. After I execute a series of commands that, taken together, define a task, I add the following:

history > cmds.yyyymmdd

where the yyyymmdd part is the current date. When I want to repeat a cumbersome task, I execute the command

find ~/ -name 'cmds.*'

to retrieve a list of such files. Usually, the directory and date tell me what task I was doing, so I can tell at once which cmds.* file will refresh my memory about how to do a similar task."

Parts Hardware Compatibility List - OSx86: "Note: Hardware that is confirmed to be compatible with 10.4.3 should be on the HCL 10.4.3 page.

ONLY add hardware which you have TESTED. Don't add hardware just because you think it might work.
Please format all comments nicely; don't inject your results into paragraphs.
This is a component-level hardware compatibility list. We want to keep it as accurate as possible but please do not entirely rely on this list when buying hardware. Add which parts you have working in the categories listed below, or create a new category. Please keep alphabetical order inside the categories. The categories themselves are ordered by popularity.
Please list any caveats or problems as well."

os x shortcuts

alt-tab -- switch between applications, same as in windows

alt-` -- switch between windows of the same application

Porting Mac OS X to Intel

Porting Mac OS X to Intel: "How hard can it be? The actual operating system will be a piece of cake; it's all those applications and device drivers that will prove troublesome.


In the hubbub over Apple's anticipated move from IBM to Intel chips, much has been said about how difficult the move would be. But really, how hard can it be to port Mac OS X to the Intel platform when its core operating system already runs on x86 chips?

Mac OS X's foundation is the open-source Darwin operating system. This, in turn, is built on the Mach 3.0 kernel. And, underneath Mach, you'll find the BSD 4.4 (Berkeley Software Distribution) Unix. In particular, Darwin owes a debt of gratitude to the FreeBSD distribution.

While Apple's own Darwin crew have focused primarily on the PowerPC platform, they've already done some work with Darwin on Intel. In addition, some open-source programs, like the Apache Web server and Sendmail, are available on Darwin.

'Much of Darwin is processor-independent BSD code,' according to the Darwin developer Web site."

Full Text Bug Listing

Full Text Bug Listing: "Make sure to deal with all 3 kernel extension caches. When installing extensions: (a) create (or delete) the Extensions.kextcache as well as the Extensions.mkext, and (b) delete the com.apple.kernelcaches. That way, we deal with all three places that kernel extensions are cached."
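In shell terms the report boils down to something like this (a sketch; these are the usual 10.4-era cache locations, so treat the exact paths as my assumption):

sudo rm -f /System/Library/Extensions.kextcache                # cache (a)
sudo rm -f /System/Library/Extensions.mkext                    # cache (b)
sudo rm -rf /System/Library/Caches/com.apple.kernelcaches      # cache (c)
sudo touch /System/Library/Extensions                          # prompt a rebuild on next boot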

Friday, December 30, 2005

The Unofficial Apple Weblog (TUAW): "11. I'd like to also mention, for newbies, a key-command I use all the frickin' time: Command-tab, which switches between open applications. Holding this combo and pressing tab again will take you to the next open app in the list. Command-~(tilde) will toggle through in reverse order. Also, to hayssam: I use Butler's fast user switching and key-command functionality to perform switching to the login window. I don't know of a way to key-command a switch to a particular user, but I switch out to the login window all the time and Butler works nicely for this. If you don't feel like figuring Butler out (it does a whole lot more than Fast User Switching), there's a little app called WinSwitch that does FUS, and only FUS. Both are donationware. Here are links: WinSwitch: http://wincent.com/a/products/winswitch/ Butler: http://www.petermaurer.de/nasi.php?section=butler&layout=default -systemsboy"

The Unofficial Apple Weblog (TUAW)

The Unofficial Apple Weblog (TUAW):

Top X keyboard shortcuts in OS X

It's a slow weekend here at TUAW, so I figured I'd post a tip on keyboard shortcuts I've been meaning to get to for a little while here. As I've mentioned in previous posts, I'm a nut for keyboard shortcuts. They're a proven way to get work done faster, which means I get to cut back on buying Advil in bulk. So what better way to post handy, time-saving keyboard shortcuts than with a Top X list?

I searched through our archives while putting this list together to try and find shortcuts that either haven't been mentioned before or are fundamental favorites that everyone could use a reminder on. While some of these shortcuts might work in various applications, I'm specifically targeting OS X key commands here. Last but not least: I'm also trying to list shortcuts everyone can enjoy, from the elite OS X ninja to those who are reading this on their first Mac which they pulled out of the box just yesterday. So without further ado, here are my Top X keyboard shortcuts for OS X, in no particular order:

  1. cmd + k - Transmit is my favorite FTP app, but for quick and easy FTP stuff, cmd + k is OS X's built-in "Connect to Server" command, found under the Go menu in the Finder. Not nearly as feature-packed as most apps, but it's fine for any basic work.
  2. cmd + opt + i - Most of us know about cmd + i, which is the Get Info command, but if you throw opt into the mix you now have a window widely known (yet undocumented) as "Super Get Info." This handy window is basically a live Get Info window, changing with each file and folder you click on, enabling you to view and alter many file and folder stats (such as Spotlight Comments and what apps open what files) with one single window.
  3. cmd + opt + h - Hide Others. Cmd + h is great for hiding the app you're in, but Hide Others does just what it says - it hides every other app you aren't in. Great for cleaning up a cluttered view.
  4. cmd + shift + 3/4 - the infamous Screen Capture keys. Using 3 allows you to capture the entire screen to a PDF (Panther) or a PNG (Tiger) on your desktop, while using 4 will give you an all-too-handy aimer to drag out an exact capture area. For bonus points: after the cmd + shift + 4 combo is triggered, you can then hit space bar for the option of capturing whatever window the mouse is hovered over. No dragging required.
  5. cmd + w - yes I know this one's pretty obvious to some, but it's a great shortcut for new OS X users, and a fundamental shortcut across all of OS X and the apps that run on it. Nearly every application, not just Finder windows, obey the cmd + w command, making it easy to get almost any window out of your way quickly.
  6. This one's a three-punch combo: 1) cmd + opt + eject, 2) cmd + ctrl + eject, and 3) cmd + opt + ctrl + eject. What do these weird and undocumented shortcuts do, you ask? Well, in order, they sleep, restart and shutdown your Mac of course. Each of those combinations will force their respective function, unless you have open files that have yet to be saved.
  7. cmd + opt + d - show/hide the dock. A great way to free up some extra room in that screen real estate-hungry app you're running.
  8. cmd + [ and ] - forward and back in not only the Finder, but Safari and now Firefox as well. I'm sure there are more apps that obey this, as it's a handy way to move through a lot of web research or folder digging.
  9. cmd + shift + ? - yes, another basic one, but even you OS X ninjas must admit to cracking a help file or two every now and then. This is another handy shortcut as it's universal among OS X and most of its apps.
  10. cmd + opt + esc - not to be left out, I had to mention the last-resort shortcut for misbehaving applications. For new OS X users, this is a shortcut for the Force Quit menu, a sibling to ctrl - alt - delete. For the few times I need it, this is a handy shortcut as it's obtainable with only one hand.
So there you have it. I hope at least a few of these can bring some joy to your workflow. Feel free to discuss and add your own shortcuts in the comments, just make sure they meet the requirement of working in OS X.

HCLPart - OSx86: "# C-media AC97 9880 series (featured on some Gigabyte boards) - sound works but stutters under iTunes (SSE3 P4 3.6GHz), volume control doesn't work, only one output
# C-media AC97 9880 included in ASUS P5GDC-V Deluxe motherboard fully works but no SPDIF (line in and mic in do not work for me!!!!!)
# Realtek AC'97 Audio for VIA a.k.a. VT8235 (Works - left channel only; if you have pitched sound, set the Output to 48 KHz). <- Works great, thanks to whoever found the trick. To set 48 KHz: Applications -> Utilities -> Audio MIDI Setup. Change from 44.1 KHz to 48 KHz. ALAS!"

Re: Boot hangs with kbd unplugged: "Hi Heinrich,

> I am using etherboot 3.2 and mknbi-linux 0.7.1 (netboot). Works fine!
> However, if I want to operate a Linux PC without keyboard (printserver),
> the boot process hangs after the message
>
> Linux Net Boot Image Loader

This was a known error in prior releases of etherboot; it should be fixed in release 3.2. You are experiencing problems with a buggy keyboard controller. Most machines work just fine without keyboards, but you are stuck with a defective motherboard. The problem occurs when etherboot (or Linux) tries to enable the gate A20, which is necessary to access any memory location above 1MB. This is done by programming the keyboard controller. Ideally, it should be independent of whether a keyboard is attached, but with your machine the controller does not acknowledge the operation if there is no keyboard. You could try playing around with your BIOS configuration. If there is any mention of the word 'keyboard' or 'gate A20', then change these options.

> As soon as I plug in the keyboard, the process continues with
>
> Uncompressing Linux...done
> Now booting the kernel

This almost looks as if it is Linux that is waiting for the keyboard controller. You might need to patch the Linux kernel. Take a look at the end of 'src/misc.c' in the etherboot 3.2 source tree. There is a timeout of 1s in the empty_8042() code. You will probably have to port this code into the appropriate place in the Linux starter. Check around line 640 in /usr/src/linux/arch/i386/boot/setup.S; after applying this patch, you will also have to make certain that the gate A20 is enabled by some other means. If you are lucky, this can be done from your system BIOS. There might be an option labeled 'fast gate A20' that could have some effect...

> Is the Boot Image Loader waiting for any keyboard input?
> If so, can this be disabled?

If none of the above helps, you will have to buy some cheap 20,-DM keyboard that you keep attached to your machine."

fiddling with the bios resulted in the pc hanging

try changing A20 Fast to normal and the pc will hang, or take forever to boot. good thing my bro knows how to reset the cmos


history note:

A20 - a pain from the past: "A20 - a pain from the past
Everybody hates the CapsLock key, but keyboard manufacturers continue producing keyboards with CapsLock - it could be that someone wants it.

With A20 it is similar but worse. Really nobody wants it, but it continues to haunt us.

History
The 8088 in the original PC had only 20 address lines, good for 1 MB. The maximum address FFFF:FFFF addresses 0x10ffef, and this would silently wrap to 0x0ffef. When the 286 (with 24 address lines) was introduced, it had a real mode that was intended to be 100% compatible with the 8088. However, it failed to do this address truncation (a bug), and people found that there existed programs that actually depended on this truncation. Trying to achieve perfect compatibility, IBM invented a switch to enable/disable the 0x100000 address bit. Since the 8042 keyboard controller happened to have a spare pin, that was used to control the AND gate that disables this address bit. The signal is called A20, and if it is zero, bit 20 of all addresses is cleared."

Google Groups : comp.os.linux.hardware

Google Groups : comp.os.linux.hardware: "Jean-Pierre Moreau wrote:
> > . . .
> > I get disk read around 1.5 MB/s, when nominal speed for UDMA 100
> > is 100 MB/s, as it should be from output of commands below.

> Just so you know, 100MB/s is the maximum transfer rate of the UDMA 100
> interface, not of the drive itself. You'll probably get something more like
> 30MB/s with that drive (The trick is that if you attach multiple drives,
> they're basically sharing that 100MB/s, so the maximum transfer rate of the
> interface is almost always going to be greater than the maximum transfer rate
> of any one drive). "

Google Groups : linux.kernel

Google Groups : linux.kernel:

Hi all,

I'm not subscribed to the list so please CC me on replies/thread this
might produce, thanks.

I recently bought a new PC with ATI IXP chipset for sound, video, ide.
sound is working correctly but IDE was really slow (no dma) until I
recompiled my Fedora core 3 kernel, changing include/linux/pci_ids.h
to have my chipset recognized as a ati_ixp one.

here is the result of lspci on the computer:

00:00.0 Host bridge: ATI Technologies Inc: Unknown device 7833
00:01.0 PCI bridge: ATI Technologies Inc: Unknown device 7838
00:13.0 USB Controller: ATI Technologies Inc: Unknown device 4367 (rev 01)
00:13.1 USB Controller: ATI Technologies Inc: Unknown device 4368 (rev 01)
00:13.2 USB Controller: ATI Technologies Inc: Unknown device 4365 (rev 01)
00:14.0 SMBus: ATI Technologies Inc: Unknown device 4363 (rev 03)
00:14.1 IDE interface: ATI Technologies Inc: Unknown device 4369 (rev 01)
00:14.3 ISA bridge: ATI Technologies Inc: Unknown device 436c (rev 01)
00:14.4 PCI bridge: ATI Technologies Inc: Unknown device 4362 (rev 01)
00:14.5 Multimedia audio controller: ATI Technologies Inc: Unknown
device 4361 (rev 03)
01:05.0 VGA compatible controller: ATI Technologies Inc Radeon 9100 PRO IGP
02:06.0 Communication controller: Conexant HSF 56k HSFi Modem (rev 01)
02:0b.0 Ethernet controller: Realtek Semiconductor Co., Ltd.
RTL-8139/8139C/8139C+ (rev 10)

To have the chipset recognized, I just changed the line:
/* ATI IXP Chipset */
#define PCI_DEVICE_ID_ATI_IXP_IDE 0x4349

with this one:
#define PCI_DEVICE_ID_ATI_IXP_IDE 0x4369

after that, dmesg says the kernel recognized the chip and DMA is working;
hdparm -t went from 8MB/s to 47MB/s, nice!

messages during boot (dmesg):
Uniform Multi-Platform E-IDE driver Revision: 7.00alpha2
ide: Assuming 33MHz system bus speed for PIO modes; override with idebus=xx
ATIIXP: IDE controller at PCI slot 0000:00:14.1
ACPI: PCI interrupt 0000:00:14.1[A] -> GSI 10 (level, low) -> IRQ 10
ATIIXP: chipset revision 1
ATIIXP: not 100% native mode: will probe irqs later
ide0: BM-DMA at 0xf000-0xf007, BIOS settings: hda:DMA, hdb:pio
ide1: BM-DMA at 0xf008-0xf00f, BIOS settings: hdc:DMA, hdd:DMA
Probing IDE interface ide0...
hda: ST380011A, ATA DISK drive
Using cfq io scheduler
ide0 at 0x1f0-0x1f7,0x3f6 on irq 14
Probing IDE interface ide1...
hdc: HL-DT-ST DVDRAM GSA-4082B, ATAPI CD/DVD-ROM drive
hdd: DVD-ROM BDV316E, ATAPI CD/DVD-ROM drive
ide1 at 0x170-0x177,0x376 on irq 15

I guess this could be taken care of in a future release of the kernel,
or maybe you need more documentation from ATI on what has changed
between these two revisions of the chip?

I am volunteering to test the patches that might be produced to make
this hardware function correctly, if needed ...

Regards,
Pascal Lengard

Google Groups : linux.gentoo.user: "hdparm -tT /dev/hda

(or whatever drive you are concerned about.) Greater than 15MB/S is
almost certainly DMA but good DMA from newer drives should be
25-50MB/S

You can look at the drives parameters using hdparm and reading through
the man page to understand what all the values mean.

Hope this helps,
Mark "

found out these interesting things:

two partitions can't both be active; otherwise the os will not boot. you can toggle a hard disk's boot flag using an ubuntu live cd or bootable dos

if the boot sector of windows is overwritten, you can use fixboot: start the windows installer, select the Recovery option, and at the command prompt type fixboot. there's also fixmbr, or, if you boot into dos, fdisk /mbr. see the sketch below
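roughly, the session looks like this (a sketch from memory; the Recovery Console may first ask which installation to log on to and for the administrator password):

C:\WINDOWS> fixboot     (rewrites the boot sector of the system partition)
C:\WINDOWS> fixmbr      (rewrites the master boot record)

or, from a DOS boot disk:

A:\> fdisk /mbr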

Mac on Intel :: Your information resource for Apple's transition to Intel: "Intel demonstrated actual running silicon of all three new CPUs, stressing the increase in performance per watt that these units bring to the consumer. The Conroe CPU, as an example, features a 5x increase in performance per watt over its predecessor. Intel estimates that $1 billion per year will be saved in electrical costs alone for every 100 million computers utilizing these new power-efficient CPUs. "

Direct hosting of SMB over TCP/IP

Direct hosting of SMB over TCP/IP:

SUMMARY

Windows supports file and printer sharing traffic by using the Server Message Block (SMB) protocol directly hosted on TCP. This differs from earlier operating systems, in which SMB traffic requires the NetBIOS over TCP (NBT) protocol to work on a TCP/IP transport. Removing the NetBIOS transport has several advantages, including:

* Simplifying the transport of SMB traffic.
* Removing WINS and NetBIOS broadcast as a means of name resolution.
* Standardizing name resolution on DNS for file and printer sharing.

If both the direct hosted and NBT interfaces are enabled, both methods are tried at the same time and the first to respond is used. This allows Windows to function properly with operating systems that do not support direct hosting of SMB traffic.

MORE INFORMATION

NetBIOS over TCP traditionally uses the following ports:

   nbname       137/UDP
   nbname       137/TCP
   nbdatagram   138/UDP
   nbsession    139/TCP

Direct hosted "NetBIOS-less" SMB traffic uses port 445 (TCP and UDP). In this situation, a four-byte header precedes the SMB traffic. The first byte of this header is always 0x00, and the next three bytes are the length of the remaining data.
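So, for example, a 76-byte SMB message would be preceded by the four header bytes 0x00 0x00 0x00 0x4C, since 76 = 0x00004C; the receiver reads the header, then reads exactly that many more bytes.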

Use the following steps to disable NetBIOS over TCP/IP; this procedure forces all SMB traffic to be direct hosted. Take care in implementing this setting, because it causes the Windows-based computer to be unable to communicate with earlier operating systems that use SMB traffic:
1. Click Start, point to Settings, and then click Network and Dial-up Connection.
2. Right-click Local Area Connection, and then click Properties.
3. Click Internet Protocol (TCP/IP), and then click Properties.
4. Click Advanced.
5. Click the WINS tab, and then click Disable NetBIOS over TCP/IP.
You can also disable NetBIOS over TCP/IP by using a DHCP server with Microsoft vendor-specific option code 1, ("Disable NetBIOS over TCP/IP"). Setting this option to a value of 2 disables NBT. For more information about using this method, refer to the DHCP Server Help file in Windows.

To determine if NetBIOS over TCP/IP is enabled on a Windows-based computer, issue a net config redirector or net config server command at a command prompt. The output shows bindings for the NetbiosSmb device (which is the "NetBIOS-less" transport) and for the NetBT_Tcpip device (which is the NetBIOS over TCP transport). For example, the following sample output shows both the direct hosted and the NBT transport bound to the adapter:
   Workstation active on
NetbiosSmb (000000000000)
NetBT_Tcpip_{610E2A3A-16C7-4E66-A11D-A483A5468C10} (02004C4F4F50)
NetBT_Tcpip_{CAF8956D-99FB-46E3-B04B-D4BB1AE93982} (009027CED4C2)
NetBT_Tcpip is bound to each adapter individually; an instance of NetBT_Tcpip is shown for each network adapter that it is bound to. NetbiosSmb is a global device, and is not bound on a per-adapter basis. This means that direct-hosted SMB's cannot be disabled in Windows without disabling File and Printer Sharing for Microsoft Networks completely.


OSx86 Project Forum > The Official Dual Booting Thread: "the simple way that worked for me (not sure if it's the easiest though):
XP already installed, made free space after C: for OS X. after the install of 10.4.3 1111, went to Pref - Startup and selected OS X, then edited /Library/Preferences/SystemConfiguration/com.apple.Boot.plist and added these lines.
way to do it:
1. open terminal and type:
sudo -s
nano /Library/Preferences/SystemConfiguration/com.apple.Boot.plist
2. add
Quiet Boot
No
Timeout
10

quit and restart
normally you should have 10s to press F8 and then you get your Darwin bootloader"
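The forum software evidently stripped the XML tags from step 2; in a standard com.apple.Boot.plist those additions would presumably be key/string pairs along these lines (my reconstruction):

<key>Quiet Boot</key>
<string>No</string>
<key>Timeout</key>
<string>10</string>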

looks like i'll get os x 10.4.3 working on one of my pcs after all

it won't work on cable select. the hard disk needs to be jumpered as master, and it has to be on the southbridge (the blue ide slot). the dvd boot drive should be master too; as an independent slave (not attached to the primary master's cable) it won't work, though the dvd boot drive can be a slave if it's attached to the primary master's cable

HELP ( AMD Athlon X2 3800+ & MSI ) - OSx86 Project Forum: "Thanks I've found a solution

Now it works --> The problem was that my disk was set in CS (cable select) mode and not master.

Now I have a new problem: OS X is installed and works perfectly, but the installation took 11 hours.

why? NO DMA?"

Thursday, December 29, 2005

Did 10.4.3 Mess Up Front Row? at Forever Geek: "Did 10.4.3 Mess Up Front Row?
Category: Geek_Articles | AlexTan

Yesterday, Apple upgraded Mac OS X to version 10.4.3. The update broke the script that opened Front Row and clicked the Escape button for you automatically. Whether or not this was done on purpose by Apple, I don't know, but it doesn't matter because I have found a solution.

This is the solution:

1. Go into /Applications/AppleScript/ and open up Script Editor.
2. In Script Editor, copy and paste this code:

tell application "System Events"
    tell application "Front Row" to activate
    key code 53 using {command down}
    delay 0.0
    key code 53
end tell
3. Choose File, Save As, name it Launch Front Row, save it in your Applications folder (overwriting the previous script), and change the file format to Application. Press Save.
4. If you followed my previous instructions then you should be good to go."

10-4-3-8F1111 - http://www.win2osx.net: "Convert DMG to ISO and mount as read/write in OS X

1. Open the terminal and type sudo -s, then login.

2. Convert the dmg to an iso by typing:

hdiutil convert /location/of/file.dmg -format UDTO -o /location/of/file.iso

3. Now we will mount the ISO file as a writeable DVD, so that we can copy over the patches. Do this by typing:

hdiutil attach -readwrite /location/of/file.iso"
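Once the patches are copied in, you would presumably detach the image and convert it back; a sketch (UDZO is hdiutil's zlib-compressed image format, and /dev/diskN stands for whatever device node the attach step reported):

hdiutil detach /dev/diskN
hdiutil convert /location/of/file.iso -format UDZO -o /location/of/patched.dmg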

Windows File Sharing: "Differences between SMB and NFS:

* The original SMB didn’t use IP for networking, but used NetBIOS. Therefore it could not be routed over the internet.
* NFS is defined by an RFC. CIFS is defined by the actions of Microsoft code. CIFS does printing, NFS does not.
* User verification is very different.
* CIFS uses the windows permission scheme, NFS uses the UNIX one.
* CIFS uses passwords, NFS verifies by IP number and trusts clients from allowed IP numbers.
* CIFS exports ‘shares’ and NFS exports directory subtrees.
* CIFS can change at Microsoft’s whim as long as they maintain backwards compatibility. NFS is more difficult to change, but is now on version 4.
* CIFS is case insensitive. NFS cares about case.
* CIFS uses TCP or NetBIOS. NFS uses UDP or TCP.
* CIFS has locking, NFS has a locking add-on.
* CIFS can compress data before sending it."

Wednesday, December 28, 2005

Slashdot | Intel Mac OS X Catches Up With Older Brother: "(Score:5, Funny)
by Doc Ruby

Leaked install DVD? HAH! That's for scriptkiddies. Where's the leaked kernel source code?

Re:AppleCore (Score:5, Funny)
by iphayd

http://developer.apple.com/darwin/

Now that I gave you that, you have to find the source code to the Application and GUI layers."

Slashdot | Intel Mac OS X Catches Up With Older Brother: "Current Macs run 2-10X more expensive than comparable PCs.

What? No. Macs are typically 1.1-1.5X as expensive as comparable PCs. And that's if you're just comparing technical specifications; if you start looking at really comparable PCs, with similar high-quality, well-designed and nice-looking cases and peripherals, then the Macs are pretty competitive.

What tends to make people think the gap is larger than it is is the large number of very low-end, very inexpensive PCs on the market. Apple doesn't really make any systems that compete with them."

Parodies of Stupid Email Disclaimers: "IMPORTANT: This email is intended for the use of the individual addressee(s) named above and may contain information that is confidential, privileged or unsuitable for overly sensitive persons with low self-esteem, no sense of humour or irrational religious beliefs. If you are not the intended recipient, any dissemination, distribution or copying of this email is not authorised (either explicitly or implicitly) and constitutes an irritating social faux pas. Unless the word absquatulation has been used in its correct context somewhere other than in this warning, it does not have any legal or grammatical use and may be ignored. No animals were harmed in the transmission of this email, although the yorkshire terrier next door is living on borrowed time, let me tell you. Those of you with an overwhelming fear of the unknown will be gratified to learn that there is no hidden message revealed by reading this warning backwards, so just ignore that Alert Notice from Microsoft: However, by pouring a complete circle of salt around yourself and your computer you can ensure that no harm befalls you and your pets. If you have received this email in error, please add some nutmeg and egg whites and place it in a warm oven for 40 minutes. Whisk briefly and let it stand for 2 hours before icing. "

TortoiseCVS: FAQ: "Why do the overlay icons sometimes change to random graphics?

The Windows icon cache is a fairly buggy creature. You can solve this in one of the following ways:

* Rebuild the icon cache by using the Rebuild Icons command on the CVS submenu.
* Or increase the icon cache size. Go to HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\Explorer and add a new String Value called 'Max Cached Icons'. The default value is 500 - try increasing it to 2048 (see http://support.microsoft.com/support/kb/articles/Q132/6/68.asp for more details).
* Or delete the file called ShellIconCache in your Windows directory. And reboot."
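
the same icon-cache tweak can be done from a command prompt on systems that have reg.exe (built into XP, Resource Kit on older versions) - a sketch using the key from above:

reg add "HKLM\Software\Microsoft\Windows\CurrentVersion\Explorer" /v "Max Cached Icons" /t REG_SZ /d 2048

log off and back on (or reboot) for the new cache size to take effect.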

Visual SourceSafe: Microsoft's Source Destruction System:

Visual SourceSafe: Microsoft's Source Destruction System

by Alan De Smet

There are many fine solutions for revision control systems. SourceSafe isn't one of them.

I used SourceSafe for five years, through spring 2002. It has consistently been an unpleasant experience. New versions failed to improve anything of import. I hope to dissuade you from using SourceSafe, sparing you the bad experiences I have had.
Missing Features
SourceSafe lacks usable branching support

A revision control system should provide powerful branching support. With strong branching support, developers can easily make minor revisions of old versions while work toward the next major release continues. Highly experimental code can be checked into a branch, keeping it separate from mainstream development but backing it up and making it available to other developers. If the project is "frozen" while a milestone or final release is built, a developer can continue development toward the next version on a branch. (Or more commonly, a new branch can be created for the freeze while general development continues on the main branch. When the release is done, changes on the frozen branch can be merged back into the main branch.) SourceSafe's branching support fails to effectively support any of this.

With powerful branching, a revision control system must also provide strong merging support to reconcile different branches. At the least, the system must allow a developer to examine the differences between two branches, modify them to create a merged version, and when satisfied check them in. SourceSafe's merge support is tightly integrated with checking in, making it difficult to examine differences and test the proposed merge before checking it into the tree. With this weak level of support, it's easy to check non-functioning code into the revision control system.
SourceSafe cannot be safely extended

It should be possible to easily extend your revision control system with additional functionality. The ability to send out emails summarizing check-ins is essential. When working with a team, regular email messages listing files checked in and the check-in messages associated with them really help keep everyone up to date with recent changes. You might also want to add filters to prevent check-ins of code that doesn't meet certain requirements (code missing standard copyright statements, say, or code that doesn't compile). SourceSafe barely supports this. While it is possible, every single client needs to have the additional functionality installed. If a single client lacks the extension, it will quietly fail to behave as expected. (For details, see Visual SourceSafe 6.0 Automation. Check the section "Trapping SourceSafe Events - An Overview".) You can pay even more for a third party solution, but does it make sense to invest more money in a fundamentally broken product?
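
(For comparison, CVS runs such hooks on the server, so individual clients need nothing installed. A minimal commit-mail hook in CVSROOT/loginfo might look like this - the list address is illustrative:

ALL mail -s "CVS commit: %{sVv}" dev-commits@example.com

CVS pipes the log message to the command's standard input, and %{sVv} expands to the affected files with their old and new revision numbers.)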
SourceSafe silently leaves stale files on your local system

When updating your local workspace to match the server, files which were deleted on the server should be brought to your attention. (Or deleted, since the old version can be retrieved from the revision control system.) Failure to do so risks out of date files being used in your project, often causing problems. I've frequently run into this problem when an out of date header file is incorrectly included into my project. SourceSafe fails to delete the out of date file or provide any warning.
SourceSafe badly handles slow networks and the public internet

SourceSafe is unusable over slow network connections. It's effectively unusable over the public internet. In addition, because SourceSafe works over network shares, if you place a SourceSafe server on the internet, you're exposing any weaknesses in your server's file sharing implementation to the entire world. Of course, if you're willing to invest more money in your ineffective revision control system, you can buy a third party product to solve this problem.
Managing third party modules is difficult with SourceSafe

It's not uncommon for a developer to use third party modules in a project to quickly add required functionality. For example, you might use Codejock Software's Xtreme Toolkit. It's natural to check these third party modules into your revision control system. This way, when you step backward in time to examine a previous revision, you can get the same versions of supporting libraries and third party modules that were used to build your code at this time.

Unfortunately, SourceSafe makes tracking a third party module extremely difficult. Initially checking the first version in isn't hard. Checking a new version in requires a good memory and attention to detail. To add a new version, you first recursively check out the folder holding the module. Now delete the directory on disk and replace it with the new version. Check the new version in. You now need to identify any files or directories added in the new version. Right click on the module's folder in SourceSafe and use "Show difference" to recursively generate a list of files which have been added. Note which directories hold files which have been added and which directories have been added. Now close the report of differences (the report is modal, preventing you from using SourceSafe while visible). Add the new directories as you would normally add directories. To add the new files, visit each directory holding new files and use File > Add Files to add them. Again, use the "Show difference" command to recursively generate a list of files which have been removed. Note these files and close the report of differences again. Now delete each of these files in SourceSafe.

If you've actually tweaked the third party module, SourceSafe provides no particular help in tracking down the differences and merging them into the new version.

(For comparison, to check in a new version of a third party module using CVS, you would simply run the command "cvs -q import -m 'Import of Xtreme Toolkit 1.9' xtremetoolkit Codejock XT_1_9". That's it. If you've made changes to the module that need to be integrated, you would use "cvs checkout -j XT_1_8 -j XT_1_9 xtremetoolkit". That will give you a local copy of the merged changes which you can immediately check in if satisfactory.)
Viewing and retrieving historical versions is extremely slow

It's not unusual to need to get a historical version of the source code. You might need an older version to investigate a bug report, or the current code is malfunctioning and you need to get a functioning version. SourceSafe supports this, but it's extremely slow for non-trivial projects. To get a historical version, you first need to generate a history for the entire project you're interested in. On a project with hundreds of files and just over one year of history, this can easily take over five minutes (even if you restrict the actual search to the last 48 hours of changes). Once this history is generated, you specify the version to get by selecting the last check-in to accept. The slow speed at which this process is completed discourages developers from examining previous versions, defeating much of the purpose of a revision control system.
Difficult to maintain multiple local copies of one project

While making extensive changes to a copy of the project, you may be asked to make a small change to the project. The most efficient and safest way to do this is to get another copy of the project to make the change on. SourceSafe presents two problems in doing so. First, SourceSafe only recognizes a single copy of the project on your system. You'll need to either move the project directories back where SourceSafe expects the canonical copy, or you'll need to reset SourceSafe's notion of where the canonical copy exists. Using either technique, it's easy to accidentally point SourceSafe at the wrong project and check the wrong versions of files in. Secondly, SourceSafe's weak merging features mean that if you need to change the same file in both copies of the project, you'll need to be very careful that changes to one project don't destroy changes in the other.
Safety
SourceSafe degrades on large projects

Microsoft recommends that your database not exceed 5 GB. (Source: Microsoft Best Practices) While this is a large database, it's not unreasonable for a large project, especially if you check in large binary files (like Microsoft Word documents).
SourceSafe integration can crash Visual Studio

SourceSafe can hang or crash when your system loses its connection to the SourceSafe database. While this is irritating when it happens to Visual SourceSafe itself, it can cause you to lose work when Visual Studio is using SourceSafe integration. Simply having a SourceSafe managed project open in Visual Studio is enough to open yourself to the risk. To minimize this risk (and speed up ClassView), I suggest you follow Microsoft's directions on disabling SourceSafe integration.
SourceSafe relies on dangerous file sharing

SourceSafe doesn't really run as a server, but as a set of files shared over SMB. As a result, you're relying on each individual client to not misbehave. A single misbehaving computer can destroy the database. A problem in the file sharing implementation on your operating system can damage the database. Users only needing read-only access to the revision control system need write access to the server, increasing the risk (Required Network Rights for the SourceSafe Directories).
SourceSafe should be scanned for corruption weekly

Of course, with this high risk of corruption, Microsoft recommends that you run the Analyze diagnostic program weekly. (Source: Microsoft Best Practices) While Analyze is running, all of your developers are locked out of the system (I hope everyone remembered to quit SourceSafe first!). In my experience, a weekly Analyze run on a 2 gigabyte database under Windows 2000 takes several hours.
SourceSafe handles multiple time zones badly

If you have teams using the same SourceSafe repository in different time zones, you're likely to have problems. (See Microsoft's details on the time zone bug.) The only solutions Microsoft provides are to incorrectly set the clocks of the computers to a single time zone, or to purchase a third party product.

Relatedly, this is a potential problem if any of the client computers using SourceSafe fail to have synchronized clocks. Differences of several minutes between computers can cause strange behavior from SourceSafe when it tries to reconcile information that appears to come from the future.
SourceSafe becomes corrupted

Your revision control system must be trustworthy. You're entrusting your hard work to your revision control system. If your data is corrupted, the system is worthless. SourceSafe's fundamental design assumes that clients are trustworthy, always function correctly, and that nothing interferes with the communication causing corrupted data. As a result, SourceSafe is fragile and untrustworthy. I have worked with SourceSafe at three different jobs. In each case, eventually the SourceSafe database became corrupted. Data has been corrupted, work has been lost, time has been wasted on the problem. Speaking with other developers, I have learned that my experiences are not unique.
Irritations

* Minor actions like changing the directory erase the entire contents of the output window, making it difficult to examine past actions.

* Comparing your local version to the remote repository is clumsy. You select the directory you're interested in within SourceSafe and select Compare Differences. The resulting report is modal, preventing you from working with SourceSafe while examining the report.

* When getting the latest version of files from SourceSafe, each file changed locally causes a dialog to pop up to confirm the update. The update action entirely stops while the dialog waits for your response. This is particularly irritating if you get the latest version, step away from your computer for a while, then return to discover that SourceSafe is only 10% done and waiting for your response. You can prevent the dialog from appearing in several ways, but in doing so you get no indication that any such files were encountered. So when you return to the finished update, you will have no idea that SourceSafe encountered potential problems. SourceSafe should note these files in the output window when encountered, making it easy to scan the output window for files to be investigated.

Conclusion

If you're considering SourceSafe, consider something else. If you're using SourceSafe now, migrate away as soon as possible. There are any number of options; here are just a few to check out. Most, but not all, are cross-platform, but if you're already using or considering SourceSafe, cross-platform support probably isn't that important. Note that with the exception of CVS, I don't have personal experience with any of these, so evaluate them yourself.

* BitMover's BitKeeper - Linus Torvalds chose BitKeeper to manage Linux, one of the world's most widely distributed development efforts.
* Reliable Software's Code Co-Op - It was suggested to me by someone at the development company. The web site suggests that a key focus is distributed work; individual nodes are largely self-sufficient. Occasional synchronization can be done over a network or email. It appears to share many of the goals of BitKeeper.
* CVS - CVS has its problems, but it's powerful, stable, reliable, transparent, and free (both in price and freedom). It's my current pick. If you're using CVS under Windows definitely check out the excellent (if complex) front end WinCVS. For something a bit less complicated you might want to see TortoiseCVS; it exposes CVS as a Windows Explorer extension. If you're experimenting with CVS for the first time you'll find the book Open Source Development with CVS useful. Despite the name the book is useful to anyone using CVS. Even better, the key chapters on usage are available free online.
* GNU arch - One of the newer Free Software options.
* Perforce - I'm not very familiar with Perforce, but someone from Perforce very politely inquired about being added. High speed appears to be a key design element. Looking over their promotional material, I get the distinct sense that they're targeting ClearCase users unhappy with the speed.
* PureCM - A relatively new system that appears to tightly integrate a bug/issue/change tracker. It also appears to support distributed changes (being able to check things in without access to a central server). One of their developers kindly asked to be included, so here it is.
* Rational ClearCase - I've heard it's big, slow, requires a custom file system, and really demands a dedicated administrator. But I know developers who swear by its power. I gather that it's truly industrial strength source control.
* SourceGear Vault - I don't know a lot about SourceGear Vault, but they were nice enough to ask about inclusion in the list, so here you go. SourceGear developed the SourceSafe extension SourceOffSite, so they're clearly familiar with the problems in SourceSafe. They set out to develop a superior replacement. They promise the ability to import your SourceSafe repository with all historical information, useful if you're migrating away from SourceSafe.
* Borland's StarTeam - Another product I'm not too familiar with, but they nicely asked about being included. It looks like they're trying to integrate with lots of other commonly used tools like Microsoft Project. They provide a free evaluation download; the bad news is that they want lots of information, the good news is that you get to download it immediately instead of waiting for a salesman to call.
* Subversion - Explicitly designed to replace CVS as the open source source control system, Subversion is still under heavy development but is stable enough for production use.
* TLIB Version Control - Another addition at the polite request of the publisher.
* Computers : Software : Configuration Management : Tools - This DMOZ Open Directory Project category collects information on many source control systems. If you're considering a source control system it's definitely a good place to start.

If you simply must use SourceSafe, definitely take the time to look at Microsoft's list of bugs in Visual SourceSafe 6.0 and list of fixed bugs in Visual SourceSafe 6.0 so you know what to expect. (These links were originally taken from Microsoft's Bugs page. This page may be useful if you have a different version of SourceSafe or the above links fail.)

Codejock Software: "Microsoft® Business Solutions - Solomon chooses Xtreme CommandBar for ActiveX COM to provide highly customizable, feature rich menus and toolbars. Solomon users can visit the Xtreme CommandBar product page for detailed product information."

change the current page using javascript

window.location.href = 'http://www.example.com/';

Using SMB on OS X:

Using SMB on OS X

Though programs like Sharity and Dave have allowed Mac users to access SMB shares for a while now, it wasn't until Mac OS X 10.1 that SMB capabilities were built right into the OS. There are two ways to mount an SMB share in OS X. One method is to use the Finder, and the other uses the mount command from within a Terminal window.
Using the Finder

To mount a share using the Finder, you will need at least Mac OS X 10.1. Previous versions of the OS do not contain the necessary features to support accessing SMB shares natively. The first step to mounting the share is to select the Go menu and then select Connect to Server. The Finder keyboard shortcut is Command + K. This will open the Connect to Server dialog box. In this box there is a field labeled Address. In this field you want to enter:

smb://workgroup;username@netbiosname/share

Then click the Connect button. If all goes well you will be presented with the SMB/CIFS Filesystem Authentication window. This window will list the workgroup, username, and server used in the previous window. What you will need to do now is enter the appropriate password and then click OK. When done correctly, you will now see an icon on your desktop that is labeled with the name of the share you just mounted. You should now be able to use the share just like any other drive on your system.
Using the Terminal and mount

The second method of mounting an SMB share in OS X is to delve into its UNIX roots and use the command-line interface. First, open up the Terminal. To do this, double-click on Macintosh HD, then on Applications, then Utilities, and finally, Terminal. This will open up a command line session on your Mac and present you with a prompt. To mount the share, you will first need to create a folder to attach the share to. To do this, use the mkdir command as follows:

% mkdir myshare

For you to mount the share, you need to be logged in as root. Once you've created the directory, su to the root user, then enter the following command:

# mount_smbfs -W myworkgroup //username@netbiosname/share ./myshare

This will mount the remote share as the myshare directory, which means that it will not appear on your Desktop, but you should be able to access it much like any other folder using the Finder.
Using the .nsmbrc File

Instead of having to re-enter your password, username, and workgroup every time, there's actually a shortcut available to you. You can create a file in your home directory called .nsmbrc (note the dot). This file has a simple format that I'll explain below and allows you to store the information to save you time. One thing to note is that you should use the chmod command to change the permissions of the file to 0600 to protect your passwords. My ~/.nsmbrc file is below:

[WINBOX:JLDERA:JLDERA]
addr=192.168.0.7
password=mypass
workgroup=ARTOFTECH

[WINBOX:JLDERA:MP3]
addr=192.168.0.7
password=mypass
workgroup=ARTOFTECH

This file is very straightforward. First I'll explain the line in brackets. These represent [netbiosname:username:share]. So in the first example, I'm logging onto the WINBOX server as user JLDERA and trying to mount the JLDERA share (my home folder on Winbox). Below that, the first line is the actual IP address of Winbox. This field is optional; the address can be derived from the NetBIOS name. The next line is my password, and then the appropriate workgroup. Now when I mount a share using the Finder, it won't prompt me for this additional information; I can just enter the server and share in the Connect to Server dialog and I'm all set.
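
As mentioned above, you'll want to lock the file down so other users on the machine can't read your passwords:

chmod 600 ~/.nsmbrc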

I'm Joining the Majority by Putting the Mac Aside in 2005 (by Jeremy Zawodny): "I've had the same spotty experience getting IP addy's for personal machines. Frankly though, I don't want them. There are very serious privacy issues with those. However the VPN cures these issues, as does the on-campus wireless. I'm not sure but I believe the wireless will work with any system, not just one blessed by IT.

As for your Powerbook's performance, it's not a G5. This is why Apple is switching to Intel. I've been very happy with Firefox/Thunderbird, they're my main workaday apps."

I'm Joining the Majority by Putting the Mac Aside in 2005 (by Jeremy Zawodny): "A simple way of increasing the perceived speed of OS X, at least in one aspect:

Open a terminal, and type the following:

defaults write NSGlobalDomain NSWindowResizeTime .001

(Then hit enter, of course)

Applications started after you do this will have 'sheets' that snap open, instead of slowly oozing out.

The change does not require a restart, and it is persistent, so you need not type this in at every login. It's stored in your defaults database."
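
and if you ever want the slow oozing back, the tweak is easy to undo - just delete the key:

defaults delete NSGlobalDomain NSWindowResizeTime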

Everything Sysadmin: Sweet Jesus, I can't believe Apple didn't make this the default!: "Nope--this still doesn't fix this for me. It allows me to use the keyboard in *some* things but... go to a page where you have to log in using Apache authentication--try and tab to the 'remember this logon..' It still requires a mouse.. :-/
Posted by: ges at January 4, 2005 12:46 PM

It fixes it for all Apple applications. Private discussion with ges finds that he's talking about Firefox and other Mozilla projects. Looks like it's time to file a bug with them!
Posted by: njtom at January 4, 2005 01:22 PM

For Firefox/Mozilla, type 'about:config' in the Location bar, then change the option accessibility.tabfocus to '3'"

Everything Sysadmin: Sweet Jesus, I can't believe Apple didn't make this the default!: "Sweet Jesus, I can't believe Apple didn't make this the default!

Apple Menu -> System Preferences -> Keyboard & Mouse -> Keyboard Shortcuts
Then click on the bottom checkbox.

TAB now moves between checkboxes, buttons, and so on.

I can't believe that isn't the default. What were they thinking?

(This announcement was brought to you by Unix Wonks That Now Use Macintoshes)"

macosxhints - 10.4: An overview of NTFS support in Tiger: "
Here's the man page: (man mount_ntfs)

MOUNT_NTFS(8) BSD System Manager's Manual MOUNT_NTFS(8)

NAME
mount_ntfs -- mount an NTFS file system

SYNOPSIS
mount_ntfs [-a] [-s] [-u uid] [-g gid] [-m mask] special node

DESCRIPTION
The mount_ntfs utility attaches the NTFS file system residing on the
device special to the global file system namespace at the location indi-
cated by node. This command is normally executed by mount(8) at boot
time, but can be used by any user to mount an NTFS file system on any
directory that they own (provided, of course, that they have appropriate
access to the device that contains the file system).

The options are as follows:

-a Force behaviour to return MS-DOS 8.3 names also on readdir().

-s Make name lookup case sensitive.

-u uid Set the owner of the files in the file system to uid. The
default owner is the owner of the directory on which the file
system is being mounted.

-g gid Set the group of the files in the file system to gid. The
default group is the group of the directory on which the file
system is being mounted.

-m mask
Specify the maximum file permissions for files in the file sys-
tem.

FEATURES
NTFS file attributes are accessed in following way:

foo[[:ATTRTYPE]:ATTRNAME]

`ATTRTYPE' is one of the identifiers listed in $AttrDef file of volume.
Default is $DATA. `ATTRNAME' is an attribute name. Default is none.

EXAMPLES
To mount an ntfs volume located in /dev/ad1s1:

# mount_ntfs /dev/ad1s1 /mnt

To get the volume name (in Unicode):

# cat /mnt/\$Volume:\$VOLUME_NAME

To read directory raw data:

# cat /mnt/foodir:\$INDEX_ROOT:\$I30"
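
in practice, mounting an NTFS volume on Tiger goes something like this (the device name is just an example - check diskutil list for the real one):

diskutil list                          # find the NTFS partition
mkdir /Volumes/winxp                   # create a mount point
sudo mount_ntfs /dev/disk0s1 /Volumes/winxp

note that Tiger's built-in NTFS support is read-only.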

Monday, December 26, 2005

Technophilia Exemplified » Enabling Cleartype in Ubuntu Linux: "Enabling Cleartype in Ubuntu Linux
Blogged by Mindwarp as Computers, Linux — Mindwarp Mon 19 Sep 2005 5:08 pm

1. Add multiverse to your list of repositories
2. apt-get install msttcorefonts
3. Add DisplaySize 338 211 #1280x800 under your monitor section (google for other resolutions using DisplaySize 96dpi and your resolution)
4. sudo dpkg-reconfigure fontconfig and enable subpixel rendering
5. Enable subpixel rendering in the gnome-font-config
"

OSXFAQ - Technical News and Support for Mac OS X:

I Thought Mac OS X Was Supposed To Be The End Of Extensions Hell ??

by Dr. John Timmer, Contributing Editor
Question: I thought Mac OS X was supposed to be the end of extensions hell, but I still see a folder called Extensions in the System's Library Folder. What's the deal?

Short answer: Yes, they are extensions, but these are fundamentally different. They're extensions to the kernel itself, and are used in a way that should prevent conflicts.


Long Answer (with a few more questions thrown in):

Q: Just what is a Kernel extension?

We all know how troublesome extensions were in Mac OS 9. By attempting to load code into the OS itself, they often led to conflicts and crashes. Apple promised to leave all that behind in Mac OS X, but if you dig around the System folder of Mac OS X, you'll still find an Extensions folder. What resides inside, however, is nothing like what you've been familiar with. Hang on while I describe what they are, how they work, and how they affect the stability of Mac OS X.

Answer:

One of the kernel's jobs is to allow the software running on it to communicate with its underlying hardware. In many Unix-based OSes, the kernel would have to contain all the code necessary for this communication. Considering the variety of hardware a desktop OS will come in contact with, that could mean a lot of memory wasted. It also might not work well with USB and Firewire devices, which may appear and disappear at fairly random moments. In order to use memory more efficiently, Mac OS X (and several other modern Unixes) allows code such as device drivers to be loaded into the kernel dynamically.

Since Apple had to generate this mechanism for altering kernel behavior anyway, they also chose to use it for other forms of kernel communication, such as adding file system and networking capabilities. In short, anything that alters fundamental OS behavior or capabilities but doesn't affect the GUI is likely to require a kernel extension. Most of the kernel extensions you'll see, however, are designed for communications with hardware.

If kernel extensions are so useful and flexible, why aren't they used for a lot more? Unfortunately, just like the extensions of Mac OS 9, they come with a price in terms of stability. Mac OS X's stability is largely due to its memory protection, a function that is provided by the kernel. The flipside is that, when code is loaded into the kernel itself, it's able to overstep the memory protection that the kernel provides. Thus, Apple advises developers to stay out of the kernel if at all possible. That means finding a way to provide system-wide services without resorting to a kernel extension.


Q: How do kernel extensions get loaded during booting? I don't see any rows of dancing icons...

If kernel extensions are required for all hardware communications, how does the kernel even get started without them?

Answer:

Some basic hardware information is acquired early in the boot process by the machine's firmware, but only enough to find a startup disk and locate the operating system on it. The firmware then hands over the booting process to a program called BootX, which (among other things) loads the kernel extensions. Most of its work is done before Mac OS X's Core Graphics code is loaded, though, so drawing dancing icons isn't an option.

BootX's extensions loading has two features that have been added in order to speed the loading process and, correspondingly, the boot time. The first is that the OS caches a lot of the kernel extension information in a single, structured file that can be loaded into the kernel quickly. All kernel extensions contain a property list that includes a flag for whether the hardware it supports is essential for booting or not. During the first boot, all those marked as essential are put into a cache file (it can be found in System:Library, called "Extensions.mkext"). The OS continues to use that file until its modified date is different from that of the Extensions folder itself; at that point, it reverts to loading them from the Extensions folder and then re-creates the Extensions.mkext file.

The second is that the OS loads first and asks questions later. In other words, BootX loads every extension without determining whether the hardware it supports is present or not (more on that below). No time wasted making decisions makes for fast booting.



Question: Isn't loading every kernel extension a bit inefficient? How does the kernel know which are actually needed?

Early in this article, I mentioned that having the kernel contain all the code necessary for any possible hardware is a pretty good way to waste memory space. Then I turned around and said that the OS loads them all anyway. Believe it or not, both are true!

Answer: One of the first processes to be launched after the kernel itself is the Kernel Extension Daemon, or kextd, which handles the loading and unloading of extensions (if you check your machine now, you'll find it's still running). One of kextd's first tasks is to determine which extensions are actually needed. To do this, it starts doing what's called hardware matching. Each kernel extension contains information as to what sort of hardware it can support. kextd checks this information against the actual hardware present; if a given extension doesn't match any hardware that's actually there, it's unloaded, freeing up memory.

Hardware matching is actually a bit more complicated, though. Apple uses both ATI and NVidia video cards, and the extensions folder contains drivers for each - how does the OS know which one to use on a given machine? It turns out that there are several levels of matching that go on. On a generic level, a driver may support a class of hardware, such as a video card or a hard drive. It may also enable features for a more specific type of hardware, such as a firewire hard drive. Finally, the driver can support very specific features, such as those provided by drives using the Oxford Firewire bridges. The kextd tries to find this last, specific match first. If it fails to find that, it looks for something less specific, and finally defaults to a basic driver, such as one for all firewire disk devices. If a driver never winds up matching any of the existing hardware, or can only match with hardware that's already made a better match, it gets unloaded. Not only does this careful matching process save on memory used by the kernel, but it should eliminate the possibility of two extensions trying to perform the same function, thus taking care of extensions conflicts.



Question: If an unused kernel extension gets unloaded, how come something still manages to recognize a firewire drive when it's plugged in?

There's a related question that provides the answer to this one: why is kextd still running three weeks after the boot process is complete?

Answer: The best thing about kernel extensions is that they're dynamic - they can be loaded or unloaded at any time. kextd hangs around waiting for new hardware to show up. If and when it does, it goes through the hardware matching process all over again, allowing things such as USB CD burners and Firewire hard drives to be recognized whenever they get plugged in.
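
All of this machinery is visible from Terminal (the kext path below is only an example):

kextstat | more                                             # list everything currently loaded
ps ax | grep kextd                                          # the daemon, still running
sudo kextload /System/Library/Extensions/webdav_fs.kext     # load an extension by hand
sudo kextunload /System/Library/Extensions/webdav_fs.kext   # and unload it again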



To sum up, there are still extensions around in Mac OS X, but they are unlikely to be the cause of instability. That's because their role has been reduced to extending the capacity of the kernel to communicate with hardware and software devices such as graphics cards and network interfaces. In addition, a carefully designed matching system helps prevent two extensions from conflicting.