Personal Wikis and Link Autosuggestion

[Update Jan 2020: OneNote has a form of autocomplete and low-friction new page creation: just press CTRL+K and start typing to instantly shortlist pages and sections by title. The lack of this had bothered me for years, and it was right there in the UserVoice page I had contributed to for years!]

I absolutely love wikis and have used them personally and professionally for years.

I was not surprised to learn recently that the US intelligence community uses them extensively 1, as does the UK’s GCHQ 2.

I think I started out with Wikidpad as my personal wiki before it was even open sourced. It was (and is) a phenomenal wiki. Windows native, but Python based so with some effort you can get it running on Linux and OS X too.

When I started using OS X both at work and personally, I moved my Wikidpad notes to nvAlt, another stupendous personal information manager that combined near instantaneous search with the ability to create a note right out of your search and super easy note linking with link autocompletion.


A killer feature for me is the ability to get link suggestions/autocompletions as you type. Just type [[ and start typing a name; if a matching note exists, you get a list of matching notes you can select and link to. Confluence, Wikidpad, nvAlt and SharePoint Wiki all have this natively. You can get it in MediaWiki with plugins like LinkSuggest, but it only starts to suggest after the first three letters. This feature is missing from OneNote, although linking via [[ is supported.
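Under the hood, this kind of link autosuggestion is essentially a live match over note titles as you type after [[. Here is a minimal sketch in Python; the titles and the matching rule are illustrative only, not any particular wiki's implementation:

```python
def suggest_links(titles, typed, min_chars=1):
    """Return note titles matching what the user has typed after '[['.

    Case-insensitive substring match. The min_chars threshold mimics
    plugins like LinkSuggest, which only suggest after a minimum
    number of letters have been typed.
    """
    if len(typed) < min_chars:
        return []
    needle = typed.lower()
    return sorted(t for t in titles if needle in t.lower())

notes = ["Belgrade Foreign Visitors Club", "Confluence tips", "MediaWiki plugins"]
print(suggest_links(notes, "medi"))             # -> ['MediaWiki plugins']
print(suggest_links(notes, "me", min_chars=3))  # -> [] (below threshold)
```

A real implementation would also rank results (prefix matches before substring matches, recently edited notes first), but the core is just this filter re-run on every keystroke.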

Whilst I loved nvAlt for my personal wiki / notebook, I also wanted a public notebook or wiki.

I tended to find myself using one of two wikis for public wikis: MediaWiki or Confluence.

I had been using MediaWiki for several projects (e.g. the Belgrade Foreign Visitors Club wiki) and found it a phenomenally powerful platform, especially when you extend it with plugins like Semantic MediaWiki. I also greatly enjoyed Confluence. I used it for many years in a former company, where it was an indispensable tool for us. We used it for all our internal documentation, but also for external-facing user documentation.

Confluence is hard to beat on features, especially the much-loved link autocompletion. It is a full-on enterprise wiki, but it comes at a price. Unlike MediaWiki, you need a dedicated VM or machine to run it. It is Java based and needs loads of memory to be performant. The license is dirt cheap for individuals and small teams ($10 for 10 users), but as soon as you exceed this you are paying big bucks for the software. You also need to be fairly technically proficient to operate a Confluence instance, though it is very well supported.

These days I am mostly using OneNote for my notes and personal wiki. It is an absolutely superb piece of software that “just works” on every platform I use (Windows, OS X, iOS, Windows Phone). I have filed a feature request (internally) with the OneNote team for them to support link autocompletion. If you like the idea, please vote for it on the OneNote team’s UserVoice.

I have been tempted to use OneNote as a public wiki too. It is trivially easy to share a notebook with the public. The only problems are that the URLs are ugly and the notebook cannot be styled to look unique to you.

If I can find a way to easily shuttle my OneNotes to Confluence, I may have a winner. I can do all my composing in OneNote, then just publish to Confluence 3.

I am already considering doing this for blogging, now that OneNote for Windows has a blogging feature.

If you are looking for some resources to get started with your own wiki, here you go….

See also:

Transclusion – the inclusion of the content of a document into another document by reference. In Confluence, for example, you can mark up some text in one page and call that text into another page with a placeholder variable. This is super useful for avoiding duplication of content.
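The mechanism behind transclusion can be sketched as a simple resolver: pages declare named excerpts, and other pages pull them in via placeholders. The following is a toy Python model of the idea; the brace-based markup is invented for illustration and is not Confluence's actual macro syntax:

```python
import re

# Hypothetical page store: one page declares a named excerpt,
# another transcludes it by reference.
pages = {
    "install-guide": "Intro.\n{excerpt:license}Licensed per 10 users.{/excerpt}\nMore.",
    "pricing":       "Costs: {include:install-guide:license}",
}

def excerpts(text):
    """Collect named excerpt blocks declared in a page body."""
    return dict(re.findall(r"\{excerpt:(\w+)\}(.*?)\{/excerpt\}", text, re.S))

def render(name):
    """Replace include placeholders with the referenced excerpt text."""
    def sub(match):
        page, excerpt = match.group(1), match.group(2)
        return excerpts(pages[page])[excerpt]
    text = re.sub(r"\{include:([\w-]+):(\w+)\}", sub, pages[name])
    # Strip the excerpt markers themselves when rendering the source page.
    return re.sub(r"\{/?excerpt(?::\w+)?\}", "", text)

print(render("pricing"))  # -> Costs: Licensed per 10 users.
```

The key property is that the excerpt text lives in exactly one place; every page that transcludes it updates automatically when the source changes.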

  1. Structured Analytic Techniques for Intelligence Analysis by Richards J. Heuer, Jr., and Randolph H. Pherson (2011)
  2. One of Edward Snowden’s leaks was a copy of the “Internal Wikipedia” used by GCHQ
  3. Plugin writers, I beseech you!

Kevin Kelly on design and the Scientific Method

[I noticed I had 36 posts in the drafts folder, some dating back years. It can be quite fascinating to see what had your attention years ago. This one, last edited in March 2009, is just a collection of notes for a post, but there were some gems from Kevin Kelly]

Totally engrossed in the subject of resources and pipeline management, information design, intermediate technology and dashboard design

“n-dimensional gigantic hypercube of all the possible solutions to how to design the things, and we are just wandering around trying to find the best one.” – Stack Overflow podcast

How do committees invent?

In a discussion on Zen and The Art of Motorcycle Maintenance, Kevin Kelly made this observation:

From Pirsig’s description of Scientific Method:

* Statement of problem
* [ hypothesis
* [ experiment
* conclusion

That is, Scientific Method consists of a statement of the
problem, followed by a repetition of: generate hypotheses
and perform experiments to test hypotheses, followed by
a conclusion.

Consider a parallel with software design:

* Statement of requirements
* [ architect/design
* [ implement/test
* deliver

Software design can be considered to be a
Statement of requirements, followed by a repetition of:
generate a proposed design then implement and test it;
followed by delivery of the final system.
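The parallel Kelley draws is the same iterative loop in both cases. A throwaway Python sketch of that shared shape (the generate/test functions are placeholders, not a real method):

```python
import itertools

def iterate(statement, generate, test, max_rounds=100):
    """Shared shape of Scientific Method and software design:
    state the problem, then repeat generate/test until a
    candidate survives, then conclude/deliver it."""
    candidate = None
    for _ in range(max_rounds):
        candidate = generate(statement)  # hypothesis / proposed design
        if test(candidate):              # experiment / implement & test
            break                        # conclusion / delivery
    return candidate

# Toy use: "design" a number meeting the requirement by counting up.
counter = itertools.count()
result = iterate(5, generate=lambda req: next(counter), test=lambda c: c >= 5)
print(result)  # -> 5
```

Pirsig's point, echoed below, is that the generate step never runs dry: each candidate suggests several more, so the loop terminates by satisficing, not by exhausting the space.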

Now, Pirsig goes into the fact that what seems like it
should be the hardest part–generating viable hypotheses–
in practice turns out to be the easiest. In fact, there’s
no end to them; the act of exploring one hypothesis brings
to mind a multitude of others. The harder you look, the
more you find. It is an open, not a closed, system.

I would suggest that this correspondence holds: that
the set of possible designs to meet the requirements is
infinite; that the act of generating a design brings to
mind multiple alternatives; that generating a design
increases, rather than decreases, the set of possible
alternative designs.

This is argument by analogy and therefore not particularly
forceful, but I feel certain, myself, that it holds. It
certainly feels right, intuitively. I think it ties in
with Goedel’s work on decidability: that any sufficiently
complex system–which any programming language is–is able
to say more than it can prove. Thus there’s always another
hypothesis that might give better answers; there’s always
another design that might solve the problem better. There’s
always room for an architect that can pull the magic out
of the clouds.

That last bit ties in to a point I’d like to expand on. That
is, that all formalisms, or design methodologies, are in
some way limiting. By adhering strictly to a particular
design process, you forego the gains that come from
inventing a new, better process.

Admittedly, you also ‘forego’ the time lost on ideas
that don’t work out.

Process or methodology is a means of getting a Ratchet Effect,
or Holding The Gains. It’s a way of applying
a pattern of development to other, related, projects.
There needs to be a way of allowing for new developments
and ideas, though.

“There’s no one more qualified to modify a system than
the last person to work on it”. That seems counter-
intuitive; one would think that the people that created
it understand it best. However, they’ve moved on to
other things, while the later maintainers got the
benefit of all the original designers’ work plus,
in addition, all that was later learned about the
system, such as how it reacts to the customers, and
how it responds to maintenance.

Software design is made up partly of flashing new insights,
and partly of routine solutions that have been invented over
and over again. Codifying patterns is a way of ratcheting
the whole community up to near the level of the leaders, at
least in terms of the routine solutions.

It’s still necessary to allow for the insights, though. A
lot of the big-company emphasis on process ignores this, assuming
that nothing is ever new, and that the answers of yesterday
are good enough for tomorrow.

(this is turning into a pretty good rant, but I think I’ll
cut it off for now)

— KevinKelley – http://clublet.com/why?ZenAndTheArtOfMotorcycleMaintenance

[Dec 2014: Sadly Clublet.com is not working, and archive.org has no archive of this page]

Clay Shirky on Culture Cones


Last month (January 2014) Clay Shirky gave a talk at Microsoft (50mins with Q&A). He took the opportunity to float some new ideas he has about Culture Cones, a metaphor he has borrowed from the physics concept of light cones.

He starts the description of the concept at 12m 45s into the talk.

Imagine two observers. The first is one light year from a supernova, the other is two light years away from the supernova.  If the supernova explodes with a flash, the event will “happen” one year later to the first observer and two years later to the second observer. One sees it a year before the other.

So it is with cultural events and memes. Culture cones move through networks like light cones through space.

Shirky asks, “When was the first time you heard about bitcoin?” – a culture cone moving through society right now.

Less connected people experience these events much later. To them, the supernova has just flashed, no matter how long ago it actually happened. Technologists experience this all the time when their family eventually asks them about some new thing that is actually old: “So what’s this Tor thing?”

It’s worth watching the talk. He even mentions John Boyd and OODA loops.

Clay Shirky – Social Computing Symposium -16 January 2014