Wednesday, July 28, 2010

How to automatically schedule almost everything in advance

I've recently begun thinking about how to be more productive. The GTD (Getting Things Done) methodology looks interesting, but I haven't delved much into it. I suppose one way to be more productive would be to schedule in advance actions that can be done automatically on a specific future day and time. This not only saves you time, but also frees your mind to concentrate on other tasks.

Scheduling emails 


While some people might joke about auto-sending emails in the middle of the night to fool their bosses into thinking they are working late, this can actually be used for legitimate purposes.

For instance, you might want to send an email to members of your team reminding them about a certain deadline, but it might be too early to do so now. Why not schedule an email to be sent automatically on that date?

This feature is built into Outlook 2007.

For other email clients or services, see this.
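If your email client or service has no scheduling option at all, a tiny script can do the waiting for you. Below is a minimal sketch in Python, assuming a hypothetical SMTP server and addresses; it simply sleeps until the chosen time and then sends the message (a real scheduler such as cron would be more robust for anything serious).

```python
import smtplib
import time
from datetime import datetime
from email.message import EmailMessage

# Hypothetical details -- replace the server, addresses and time with your own.
SEND_AT = datetime(2010, 8, 15, 9, 0)      # when the reminder should go out
SMTP_HOST = "smtp.example.com"

msg = EmailMessage()
msg["From"] = "me@example.com"
msg["To"] = "team@example.com"
msg["Subject"] = "Reminder: report deadline is this Friday"
msg.set_content("Just a reminder that the report is due this Friday.")

# Wait until the scheduled time, then send.
time.sleep(max(0, (SEND_AT - datetime.now()).total_seconds()))
with smtplib.SMTP(SMTP_HOST) as server:
    server.send_message(msg)
```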

Somewhat related to scheduling outgoing emails is delaying emails you receive, so that they are resent to you later when you can see and deal with them again. One example is Layar. Somewhat different is Boomerang for Gmail.

Boomerang for Gmail



Scheduling tweets and Facebook status updates


I'm responsible for updating my library's Twitter account. While I use Twitterfeed to autopost tweets from other sources, I often have to manually post one-off tweets. One typical example would be a tweet about, say, maintenance of a web service. Typically I'm informed about this a month or so in advance. I could tweet about it at that point, but to be more effective, another tweet should be sent closer to the time, or even during the maintenance period for very critical services.

Clients like TweetDeck, or services like HootSuite, CoTweet and others, allow you to schedule tweets in advance so you don't have to remember to.

Scheduling tweets using TweetDeck

Many of these services also work with Facebook statuses. Since you can link your Twitter account to other social media accounts like Facebook, you can also schedule those updates indirectly, by auto-tweeting something and letting it be pushed to your Facebook account.

A special use of scheduling tweets in advance is when you are presenting at a conference and want tweets to go out at the same time. You can use some of the services above, or you can use this, which auto-tweets as you change slides.




Recently I started experimenting with Bufferapp. You fill it with tweets and it then spreads them out, say one every hour. Why would you do this?

Basically, most people tend to read tweets in spurts. For example, between 7 and 8 I'm usually using Flipboard to consume tweets, and this is when I find the largest number of retweet-worthy items all at once.

But rather than spam your followers' feeds with, say, 10 retweets in a short period of time, you can instead store them in Bufferapp, which will spread out the tweets automatically (at times you set earlier). You could of course manually schedule tweets using TweetDeck etc., but that is too much effort since you need to set a time for each tweet.

Bufferapp tweets on a schedule you set beforehand, which is much easier.
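To make the queue idea concrete, here is a toy sketch of the "fill a queue, post at preset slots" behaviour. This is not Buffer's actual logic; the slot times and queued tweets are invented for illustration, and the final step just prints instead of posting.

```python
from datetime import datetime, time, timedelta

# Hypothetical posting slots and queued tweets.
SLOTS = [time(9, 0), time(13, 0), time(17, 0)]
queue = [
    "RT: interesting article on information literacy ...",
    "The library closes early today at 5pm.",
    "New e-resource trial starts next week.",
]

def next_slot(after):
    """Return the first posting slot strictly after the given datetime."""
    for slot in SLOTS:
        candidate = datetime.combine(after.date(), slot)
        if candidate > after:
            return candidate
    # No slots left today, so use tomorrow's first slot.
    return datetime.combine(after.date() + timedelta(days=1), SLOTS[0])

now = datetime.now()
for tweet in queue:
    now = next_slot(now)
    print(f"{now:%a %H:%M}  ->  {tweet}")
```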



Scheduling blog posts


If you're like me, there are days when you feel inspired to blog a lot and days when you suffer from writer's block. Hence it's important to spread out your blog posts.


If you are using Blogger, this feature is built in. Wordpress.com? Ditto. Using Posterous? Blog using the post-by-email option, and schedule the email to be sent in the future using one of the methods mentioned above.


Scheduling blog posts in Blogger



Scheduling reminders


This is not so much scheduling an action as scheduling a reminder to do something, or a reminder of an appointment. This is a huge topic with many possible methods, but the easiest is to use Google Calendar. Why? Because it allows you to receive SMS reminders (available almost worldwide).



Scheduling anything else

Other computer-related tasks can all be scheduled using your operating system's task scheduler.


Be careful


Don't go overboard in scheduling emails, tweets etc. I am somewhat wary of scheduling things too far in advance, because circumstances might change, and it can be embarrassing if you forget what you have scheduled.

For example, you schedule an email or tweet to be sent out closer to the date of an event, but a few days later the event date changes and you forget about the scheduled email or tweet... Some of these schedulers do have a list of scheduled items, which helps, but not all of them do.

Saturday, July 17, 2010

More tools to help libraries do environment scanning

One of the key themes of my blog is how to use free tools to get alerts about news of interest. More specifically, I have made a series of posts trying to figure out how to use tools to find and engage with library users who are posting, tweeting etc. online about your library.

I first mentioned the idea in Dec 2009, and followed it up with "Scanning mentions of the library - Twitter, Google alerts & more", which talked about how one could go beyond pure keyword searching to find relevant tweets. "Environment scanning for libraries - Facebook" talked about extending this to public Facebook status updates.

And finally, most recently, with the kind permission of my library superiors, in "Why libraries should proactively scan Twitter & the web for feedback - some examples" I gave concrete examples of how such techniques can delight users by providing service recovery to unhappy users who would otherwise have been missed. I recommend you read this first if you haven't yet.

As usual, I'm still experimenting with this. At the risk of quoting myself:

"In an earlier post , I talked about using Twitter, Google and other tools to scan for mentions by users of libraries online. I noted that Twitter in particular was effective due to its real time nature, and that unlike Google alerts, Twitter hits tend to be far more likely to be relevant, as the former often contained false hits, compared to a 140 character tweet which are more focused.
3 Techniques were suggested to find relevant tweets

1) Keyword match -e.g. NUS Library
2) Geolocation - e.g. Finding tweets about library within 1 km of your location.
3) Filtering based on person Tweeting - e.g. If user is following you and tweets about library, it's probably about you."

Keyword matching can be done in multiple ways, but doing (2) and (3) is a bit more difficult. This blog post will share some new tools and techniques I discovered for doing them.

Geolocation

Finding tweets based on your location can be done via Twitter search. Using the advanced search operators near and within, one can find tweets matching a keyword within x km of a certain location.

You can then get an RSS feed of that and put it into your RSS feed reader or a dashboard-style service like Netvibes or iGoogle. But as you know, most RSS feeds have a time delay of at least 15 minutes.

But what about something more real-time that pops up an alert?

In theory, since we are using standard Twitter search operators, any Twitter client that supports Twitter searches should work with the near and within operators. To my surprise, that's not true, at least for desktop clients.

It doesn't work in TweetDeck or practically any Twitter desktop client I have tried (it does appear to work in some mobile Twitter push apps like itweetreply).

The exception is the recently updated HootSuite. As mentioned in my recent round-up of Twitter tools, "Library twitter account - what tools are you using?", this is a free enterprise-level service with many advanced features for organizations that want to manage their social media accounts, including Twitter.

They recently added geolocation searches; by default it searches within 5 km of your location, but you can change this.

You can easily add Twitter searches ("streams") to HootSuite by clicking "Add a stream" and then clicking on the search tab, but the syntax you need is different.

keyword geocode:x,y,zkm

keyword is the keyword you want to match, e.g. library.

x,y should be replaced by the latitude and longitude of the position you want to scan. If you are not sure, search Google Maps and hover over the location on the map.

z should be replaced by a radius, say 1km. Tweets matching the keyword within 1 km of x,y would then be picked up.

Complete example

library geocode:1.29885,103.773508,1km



Nice thing about this? It's real-time search.
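If you'd rather query the search yourself instead of going through a client, the same keyword-plus-geocode query can be sent straight to Twitter's search API. The sketch below uses the unauthenticated JSON endpoint as it existed around 2010; the current API requires authentication and different URLs, so treat this purely as an illustration of the query shape.

```python
import json
import urllib.parse
import urllib.request

# Keyword and geocode (latitude,longitude,radius) -- same values as the
# HootSuite example above.
params = urllib.parse.urlencode({
    "q": "library",
    "geocode": "1.29885,103.773508,1km",
})
url = "http://search.twitter.com/search.json?" + params  # 2010-era endpoint

with urllib.request.urlopen(url) as response:
    data = json.load(response)

# The old search API returned matches under a "results" key.
for tweet in data.get("results", []):
    print(tweet.get("from_user"), ":", tweet.get("text"))
```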

Don't like using HootSuite? You can set up a search as above, then embed it elsewhere. One possibility is embedding it in Netvibes or iGoogle with all your other scans and searches (more about this in another post).



The nice thing is that by embedding HootSuite this way, you get access to a real-time stream, as opposed to embedding RSS feeds, which have a 10-15 minute delay.



Filtering based on person  


If you can identify your users, you can increase the accuracy of your scans, picking up relevant tweets even if they don't use the full name of your library or aren't within a certain location. For instance, followers of your Twitter account are almost certainly your users. The interesting thing is that even though they follow your account, it doesn't necessarily mean they remember to tweet at you when they have library-related requests.

Previously I talked about the listmonkey service, which allows you to get email alerts when certain keywords appear in tweets from a Twitter list.

Want something more real-time? Try JournoTwit. There are both desktop and web versions. The key here is that you can set up a search over tweets from your followers. This is a real-time search.


Just select "local" and keywords in search and you have a filter over your followers!


Filtering of RSS feeds locally


Although RSS feeds are usually not real-time and there is a time delay, you often can't get away from using them. Also, it's easy for the information you want to get buried among all the other RSS feeds.

You can, of course, get an RSS feed of tweets from those you follow, but you will need to filter it to show only relevant tweets. There are many ways to filter RSS feeds, but they typically involve using a remote server, which further delays the output.

Much better is to pull it into your RSS feed reader and filter it locally.
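If you're comfortable with a little scripting, local filtering takes only a few lines. Here is a minimal sketch using the third-party feedparser library; the feed URL and keywords are placeholders you would replace with your own.

```python
import feedparser  # third-party: pip install feedparser

# Placeholder feed URL and keywords -- substitute your own.
FEED_URL = "http://search.twitter.com/search.atom?q=library"
KEYWORDS = ("nus library", "central library", "law library")

feed = feedparser.parse(FEED_URL)
for entry in feed.entries:
    text = (entry.get("title", "") + " " + entry.get("summary", "")).lower()
    if any(keyword in text for keyword in KEYWORDS):
        print(entry.get("title"), "->", entry.get("link"))
```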

I usually use Google Reader, and it seems there is a Greasemonkey script that highlights items matching keywords.

But recently I came across GreatNews, a local RSS reader. It looks pretty interesting, but probably the best feature is the "News Watch" feature.



News Watch allows you to create a special watch over the RSS feeds you are subscribed to. The nice thing is, it even produces an alert when an item matching the search appears!




The issue here is that my preferred RSS feed reader is now Google Reader. Sadly, GreatNews does not synchronize with it, although it does with Bloglines.

If you already have all your feeds set up nicely elsewhere, what you can do is export them from Google Reader as OPML and then import the file into GreatNews to save time.


Conclusion : No perfect solution 


Ideally, you want a solution that does a real-time scan to find tweets that:


1) match a keyword, e.g. NUS Libraries
2) match a keyword (more generic than in #1, e.g. library) and are within x km of a specified location
3) match a keyword (more generic than in #1, e.g. library) and are from your followers, or better yet from users on an arbitrary Twitter list


No such solution exists to my knowledge

Tool/Service     | Keyword search | Geolocation search | Search within followers/following
TweetDeck        | Real-time      | No                 | No
HootSuite        | Real-time      | Real-time          | No
JournoTwit       | Real-time      | No                 | Real-time
Twitter (native) | via RSS        | via RSS            | via RSS & GreatNews


Or does anyone know of a solution that does this?


Sunday, July 11, 2010

Extracting metadata from PDFs - comparing EndNote, Mendeley, Zotero & WizFolio

Note: This was blogged in July 2010; since then most of the reference managers have improved substantially, so the information here can be considered outdated.

Interest in reference managers is increasing due to growing competition in the area. Martin Fenner is well known for, among many things, his reference manager overview, while Owen Stephens over in the UK has organized two conferences so far under the title "Innovations in Reference Management".

The latest reference managers are not simple citation managers; they try to take into account Web 2.0 trends, allowing sharing of references and recommending articles, and fit into workflows (working with institutional repositories etc.). But I think I'm most interested in the most basic function of a reference manager.

Namely, how easily does it allow you to import references?

There are many methods for adding references to reference managers, but recently EndNote X4 added the ability to extract bibliographic data from the PDFs you download.

This feature, which was already available in Mendeley, Zotero and WizFolio, is very useful for users who have a bunch of PDF articles but have been manually creating references in the past. To start using the reference manager, they just point it at the folders with the PDFs and automatically get the correct bibliographic data with the PDFs linked to it.

It's not as easy as that, of course. There are various methods for figuring out the bibliographic data from a PDF article: extracting metadata that publishers have embedded in the PDF (including the DOI), cross-checking with other databases such as Google Scholar (Zotero) or PubMed (WizFolio), or possibly some sort of crowdsourcing of the correct info (Mendeley). If all else fails, the reference manager can try to "guess" from the text based on its position on the page.
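None of the vendors document exactly how they do this, but the DOI route is easy to sketch: pull the text of the first page or two of the PDF, look for something that matches a DOI, and ask CrossRef for the corresponding record. The sketch below uses the third-party pypdf library and CrossRef's public REST API, with a placeholder file name; it is an illustration of the approach, not any vendor's actual code.

```python
import json
import re
import urllib.request

from pypdf import PdfReader  # third-party: pip install pypdf

# Placeholder file name -- point this at one of the downloaded articles.
reader = PdfReader("article.pdf")
text = " ".join(page.extract_text() or "" for page in reader.pages[:2])

# Look for a DOI in the first couple of pages.
match = re.search(r"10\.\d{4,9}/[-._;()/:A-Za-z0-9]+", text)
if match:
    doi = match.group(0).rstrip(".")
    # Ask CrossRef for the bibliographic record behind the DOI.
    with urllib.request.urlopen("https://api.crossref.org/works/" + doi) as resp:
        record = json.load(resp)["message"]
    print(record.get("title"), record.get("container-title"), record.get("issued"))
else:
    print("No DOI found -- fall back to guessing from the text layout")
```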


So how good are they at figuring out citations from pdfs?


I ran two simple, non-scientific tests. First, I went to Scopus, searched for the term wikipedia and ranked the results by relevance. I then located the full text via the "Find at publisher" button and downloaded the PDFs of the top 10 results into one folder.


Here are the steps for each reference manager:


EndNote X4: Select File -> Import -> Folder


Mendeley Desktop 0.9.7.1: Add documents -> Add folder - [edit] I was not logged into my Mendeley account when I did the test. The results might be a lot better when logged in.


Note: I did not use the optional function to search Google Scholar to pull in results.



Zotero 2.0.3: Link to file -> Retrieve metadata for PDF


WizFolio (version as of 11 July 2010): Add -> Upload file. Note that I did not use the optional "locate bibliography" function, but as it searches PubMed, I doubt it would make much difference to the results.


Once the citations were in the reference manager, I checked them manually, either against the "official" citation from the publisher or by eyeballing the actual PDF.





Article title | EndNote | Mendeley | Zotero | WizFolio
Reengineering the Wikipedia for Reputation (ScienceDirect) | PASS | PASS | PASS | PASS
How and why do college students use Wikipedia? (Wiley) | PASS | PASS | PASS | PASS
Personality characteristics of wikipedia members (Liebertonline.com) | PASS | PARTIAL | PASS | PASS
Toward an epistemology of Wikipedia (Wiley) | PASS | PASS | PARTIAL | PASS
Wikipedia-assisted concept thesaurus for better web media understanding (ACM DL) | FAIL | PARTIAL | PASS | FAIL
Labeling news topic threads with wikipedia entries (IEEE) | PARTIAL | PARTIAL | PASS | PARTIAL
Annotating wikipedia articles with semantic tags for structured retrieval (ACM DL) | FAIL | PARTIAL | PASS | FAIL
Is Wikipedia link structure different? (ACM DL) | FAIL | PARTIAL | PASS | FAIL
The importance of link evidence in Wikipedia (Springer book chapter) | FAIL | PARTIAL | PARTIAL | FAIL
Understanding the wikipedia phenomenon: A case for agent based modeling (ACM DL) | FAIL | PARTIAL | PASS | FAIL
Total | 4 Pass, 1 Partial, 5 Fail | 3 Pass, 7 Partial | 8 Pass, 2 Partial | 4 Pass, 1 Partial, 5 Fail


PASS = all 5 main fields (see below) are correct
FAIL = no info extracted for any of the 5 fields, or all info wrong
PARTIAL = at least 1 of the 5 fields is correct



The 5 main fields I checked were

1. Article title
2. Author
3. Publication Year
4. Journal or Conference & vol/issue
5. Page number

Initially, "Partial" was further qualified by which fields were missing, present/correct, present/wrong, present/incomplete but it got complicated fast! Since, I'm not writing a journal article, I'm going to keep things simple.

For our purposes here, "partial" means at least 1 field correct. 
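In code, the scoring rule amounts to something like this toy function, with the five fields hard-coded:

```python
FIELDS = ["title", "author", "year", "source", "pages"]

def verdict(correct):
    """correct maps each field name to True (correct) or False (wrong/missing)."""
    hits = sum(correct[field] for field in FIELDS)
    if hits == len(FIELDS):
        return "PASS"
    if hits == 0:
        return "FAIL"
    return "PARTIAL"

# Example: everything right except the journal/conference field.
print(verdict({"title": True, "author": True, "year": True,
               "source": False, "pages": True}))   # -> PARTIAL
```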

Of course, this means "partial" could mean very different things.

There are "partials" where almost every field is correct except for a minor error in the title or, more often, one missing field (typically the source or conference field).

And there are "partials" where only the article title is correct and the rest is missing or even wrong.

Oh well as I said, this is a totally unscientific test. 

Comments on results

EndNote
The video on new features in EndNote states that this feature works only on official vendor PDFs and pulls info via CrossRef/DOI. In general, EndNote plays it safe; unlike, say, Mendeley or WizFolio, when it produces a citation it is almost always 100% correct. There's one exception, where it doesn't seem to handle conference proceedings properly, earning a "partial" because it thinks the citation is a journal article and hence lacks the conference information.

Mendeley

[edit] I was not logged into my Mendeley account when I did the test. A quick retest shows that when logged in, Mendeley seems to automatically pull in data from other sources (CrossRef?), and the results are then on par with Zotero in this test. I will investigate. What follows below is based on not being logged in.
I'm not sure how Mendeley handles this, but my test of the 10 papers above shows that it almost always manages to get something, though it's not 100% accurate, resulting in many "partials". These vary from cases where it is almost completely correct except for a wrong or missing journal/conference field, to cases that are almost totally incorrect, with many missing or erroneous fields and only the article title correct.


Zotero
In this simple test, Zotero seems to do the best. Like EndNote, it appears to extract metadata via the DOI (there's one anomaly, the 4th title, which it gets almost right except that the title is messed up with CrossRef XML tags), but it outdoes EndNote by producing almost perfect bibliographic data for an additional 5 entries.


The trick seems to be that Zotero pulls records from Google Scholar. Unlike EndNote, Zotero also seems able to recognize article types other than journal articles, correctly using the conference paper type with the correct field info.


WizFolio 
I get the impression WizFolio uses the exact same technique as EndNote, except that for PDFs without DOIs it tries to "guess" like Mendeley. It doesn't do so well: at best it gets the title right, but it almost always gives the wrong year of publication, issue or pagination when it tries.


In general, EndNote provides the baseline results. Zotero does the best because of its Google Scholar integration. Mendeley and WizFolio give mixed results; of the two, Mendeley seems better, but both sometimes give wrong information. In terms of publishers, Wiley and ScienceDirect articles seem to be the easiest to handle, probably because metadata/DOI is embedded in the PDF?


These are the "official" pdfs from the publisher, so in theory, figuring out the citations would be easy. The next simple test I did was to do a search in Google Scholar. The idea here was to see how well the reference managers did with conference proceedings, "unoffical" preprints etc.


Article title | EndNote | Mendeley | Zotero | WizFolio
The Wikipedia XML corpus (ACM DL) | FAIL | PASS | PASS | FAIL
Computing semantic relatedness using wikipedia-based explicit semantic analysis (aaai.org) | FAIL | PARTIAL | PASS | PARTIAL
Semantic Wikipedia (ACM DL) | FAIL | PARTIAL | PASS | FAIL
WikiRelate! Computing Semantic Relatedness Using Wikipedia (aaai.org) | FAIL | PARTIAL | PASS | PARTIAL
Measuring Wikipedia (hapticity.net, preprint) | FAIL | PARTIAL | PASS | FAIL
Total | 5 Fail | 1 Pass, 4 Partial | 5 Pass | 2 Partial, 3 Fail



As you can see, not very well. EndNote totally fails, while WizFolio and Mendeley fare only slightly better. Zotero is the star here, but again, given its Google Scholar link, this is not surprising.

When logged into my Mendeley account, the results are not very different, with one more PASS for the 3rd item.

Conclusion

Let me stress again: this is a test I did out of curiosity. A sample of 15 articles clearly isn't enough to provide anything but my subjective impressions. For one thing, it covers mostly articles from ScienceDirect, Wiley and the ACM Digital Library. Results will differ, in particular, if you get papers from, say, JSTOR.

Also, due to the search topic used, the articles pulled up are understandably all recent (2005 and after). The results would probably differ a lot if the articles were from, say, the 90s or even older, as the PDFs available would be different (no metadata embedded, or worse, just scanned PDFs).

I'm probably going to have to do another test of this, but the last time I tried, the hit rate was dismal for older articles, even those in major databases like JSTOR. Another interesting avenue would be to look at material on preprint servers, open access repositories etc.

It goes without saying that if I were testing medical articles and used WizFolio's "locate bibliography" function to clean up citations, I'm sure the results would be excellent.

Thursday, July 8, 2010

A mixed bag of ideas

As you can probably tell from the blog name "Musings about librarianship", I clearly didn't give much thought to what I would be blogging about when I first started blogging in March 09 as the title is generic enough to cover almost anything about librarianship.

The tagline I added much later, "Keeping track of interesting and cool ideas that might be used by libraries for benefit of users", is marginally more enlightening.

So what exactly can you expect from my blog? It has been called a "tech blog" and it's pretty random, but believe it or not there are some common themes.

My blog posts are typically of two types:

1) A survey of what libraries have done in a certain area. This could be how libraries are using a new service or technology such as Twitter, but can also include areas that are already considered "old hat" e.g. library portal design.

2) Ideas that I have been using myself in my library work, or might consider trying

Many posts are a combination of both of course.


In this post, I will try to point out some common themes in my blog posts so far, and how some blog posts actually form a series of interlocking ideas.


1. A survey of what libraries have done  

Typically, my blog ideas start with me going, "Hmm, can this idea (inspired by a new tech service I've read about) be used by libraries?" I then look around to see if libraries have already done it, and if several of them have, I do the research, compare, and put them into a blog post.

Other posts involve comparing similar services and musing about the pros and cons of each.


This class of posts tends to be long and takes time to write due to the research involved, but they can be popular because they give you a summary of how libraries are using various new tools.


Early ideas

My post surveying the widgets used on so-called Subject 2.0 guides was probably one of my earliest posts to become popular, as many libraries were, and still are, considering creating dynamic subject guides on various platforms such as Netvibes, LibGuides or even pure static HTML pages.


Some widgets used on a library Netvibes page



In many ways, though, my three-part series in March 09, musing about how libraries are using OpenSearch plugins, custom toolbars and bookmarklets to enable quick access to library resources directly from the browser without going through the library portal, was actually the paradigm for the other posts in this category.



 Some opensearch addons for databases



"Accessing library catalogue & databases on your Mobile phone" is a recent attempted update for mobile phone users. 




Survey of Libraries on Twitter

In 2009 Twitter took off, and many libraries began to set up Twitter accounts. I set up a "Twitter League" of libraries (over 600 accounts) which automatically tracked various statistics like following, followers, updates and age of account.


 Top 10 libraries on Twitter by followers from Twitterleague


Using this data and more, I spent a lot of effort on a series of quantitative posts analyzing how libraries were using Twitter, including "Official Library Twitter accounts - what factors are correlated with number of followers?"

(Also see here, here, here and here.)


Here's one example, a pie graph showing follower/following ratios, which I think is interesting since it sheds some light on library accounts' following patterns.






More recently, in "Library twitter account - what tools are you using?", I surveyed a bewildering number of tools that libraries could use to manage a Twitter account, and laid out my thinking on the different factors (e.g. purpose of the account, manual vs auto-tweeting, needed speed of response, integration with other social media, preservation, analytics) affecting the class of tools you might want to use.




Aspects of library portals


Although "What are mobile friendly library sites offering?" (see later) was extremely popular, I also have a series of posts comparing features of conventional library webpages.

"Using RSS feeds to distribute library news - 6 ways" and the related "Libraries and Google Calender" resulted from me wondering how libraries were handling the display of news events on portal page (RSS or Calender widgets). 






Shorter pieces covered library portals that allowed users to customize the portal and how libraries were creating interactive floor maps.






Customizable library page



Interactive floor maps


Mobile related posts

Since buying an iPhone in Dec 09, I have become very interested in mobile, and this has resulted in a number of mobile-related posts.




Perhaps the most popular post I have done so far is "What are mobile friendly library sites offering? A survey.", which is a summary and analysis of the features I found while looking at 40+ mobile library sites. This post was actually mentioned in AL Direct and OCLC Abstracts, and was cited in an ALA OITP policy brief!




Almost equally popular was "iPhone apps for librarians", where I listed almost every library-related iPhone app I was aware of: basically a quick start for librarians who had just acquired an iPhone and wanted to see what library-related apps they could use.


In many of these cases, further developments have made the content moot (typically where the service never really took off, or died).


2. My own ideas

I have many off-the-wall ideas, so sometimes I muse about how new tools or services could be used when few or no libraries have tried them. In other cases, these are personal tricks I use myself. These "how to" posts tend to be hit and miss, though I hope they inspire people to try new ideas.


Use of RSS feeds


One main "strand" running through my blog posts involves using RSS feeds to organize & track information.


You can see this early on in the April 09 post "Rss feeds, Library databases and yahoopipes", where I talked about how to get around problems when using EZproxy with RSS feeds. (Also see "Adding ezproxy to the url - 5 different methods".)


Being able to handle RSS feeds via EZproxy made "Aggregating sources for academic research in a web 2.0 world" possible, where I pointed out that there is a myriad of RSS feed sources a researcher should keep track of beyond just traditional database or e-journal sources. A slight improvement for users of Google Reader as their feed reader was suggested here.

RSS feeds can be consumed via conventional RSS readers like Google Reader or dashboard-like services such as Netvibes or iGoogle. In "An information dashboard for your library service points", I suggested that librarians at the desk consider using a dashboard-style service to keep track of important information they need to know.


 Example of dashboard to be used at desks


Use of RSS feeds can quickly lead to information overload, so I explored using Bayesian filtering techniques to filter RSS feeds.

Of course, the goal here is to get information to travel to you, and RSS is only one method. In "Getting information to travel to you on your mobile phone", I mused about how one could be informed of desired updates on a smartphone via email, IM, SMS and app notifications.


Presentation and personal ideas



Some of the ideas I blogged about I eventually used in my day-to-day work. These include how I use SlideShare widgets, as well as how I create custom search box widgets for almost every database (a series of posts, but this is the final one, covering not only creating widgets for any database search but also how to use Google Analytics to track usage).

I have also mused about workflows at the reference desk, and the use of presentation tools such as ZoomIt, pptPlex, Prezi etc. A later 2010 post talks about presenting using iPhone apps such as MyPoint. I also blogged about, and currently use, Tungle for scheduling.




A presentation using Pptplex


In "Using library 2.0 tools for technical services" and "Some email ideas for library use - LibX and Xobni ,  I advocated and shared my experience on how library 2.0 tools could actually lead to productivity gains when used by technical services staff . My thinking was that they benefited most from efficiency gains from using such tools as they carried out many repetitive tasks. This was also eventually implemented by other libraries, though I'm not saying they got the idea from this blog of course.




Scanning twitter - environment scanning ideas


But I'm perhaps happiest with my series of posts musing on using free tools to do environment scanning of online comments about the library. My early posts about what libraries were doing on Twitter in early 2009 eventually led to me setting up a Twitter account for the library, which in turn allowed me to try scanning for and responding to library mentions on Twitter.



I first mentioned the idea in Dec 2009, and followed it up with "Scanning mentions of the library - Twitter, Google alerts & more", which talked about how one could go beyond pure keyword searching by finding relevant tweets based on user location and whether they follow you. "Environment scanning for libraries - Facebook" talked about extending this to public Facebook status updates.

And finally, most recently, with the kind permission of my library superiors, in "Why libraries should proactively scan Twitter & the web for feedback - some examples" I gave concrete examples of how such techniques can delight users by providing service recovery to unhappy users who would otherwise have been missed.

Example of tweet showing positive effects of proactive scans

Mixed bag of ideas

I have had some pretty wild ideas that didn't pan out, of course, including "Adding your library catalogue results next to Google?" as well as "Creating 3D worlds from webpages using ExitReality".



I also experimented with simple mashups, including "Dipity for libraries" and "Mashup your Library's Twitter, Flickr, Youtube, Facebook accounts!", where you can use simple tools to create stunning visualizations or presentations from your social media content.


Other ideas, such as "Location based services/pages your library should claim or monitor", where I suggested libraries should claim or monitor sites such as Foursquare, Yelp and Google Places, are probably too new to be assessed, since location-based services are still new.

Similarly too new to assess are my experiments with using iPhones to scan barcodes in "How to check your library catalogue by using your IPhone as a free barcode scanner - ZBar" (a follow-up post covers RedLaser).

"Are your patrons using CardStar iphone app as their library card?" , caused a bit of a stir on Twitter among librarians when it was discovered that for many public libraries, users were using an iPhone app as a replacement for their library cards.





"Sharing links with users - 8 different ways" was also something I currently don't use much but sets out my thinking of the possibilities available when sharing resources with users.




What's next?

Wow! That's quite a mixed bag of posts.

Chances are I will revisit many of these posts when I think of new ideas, learn of new tools, or when enough time has passed that a resurvey is worthwhile (e.g. libraries on Twitter). But I'm hoping not to repeat myself too much, and will probably stop posting if that happens.

Some topics I've been thinking of exploring in the future include reference managers (a comparison, or clever uses), libraries on Facebook (not a new topic, I know), FAQ systems for libraries, and perhaps sharing more novice tips with those new to this brave new world.

I'm willing to accept requests on what you would like to see. :)





This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.