Currently, Graph Search does not compete directly with Google and is not intended for general “informational”, keyword-based web searches. Rather, the idea is to tease interesting data out of social graph connections and relations, such as “who likes to mountain bike where I work”, “what games do my friends play” and “best pizza place near me”. The Google+ “Search Plus Your World” product has a similar goal, enhancing search results with information from Google+, but it doesn’t support relational queries. The number of brands and restaurants on Facebook actually positions Graph Search more as a competitor to Yelp and Foursquare than Google.
Facebook has not announced a monetization strategy for Graph Search yet (to the disappointment of shareholders), but one obvious route looks to be selling search ads that are targeted by combining “search intent” - which is what Google has built its business on - with Facebook’s valuable social data.
Privacy is an immediate concern with Graph Search, so much so that Facebook tried to allay those fears in their initial announcement. The effectiveness of social search relies on the amount of public data users share, putting it at odds with users’ privacy needs. It will be interesting to see how this plays out.
It is early days still and speculation about Graph Search is rampant. But it will be a valuable discovery tool for businesses with Facebook Pages and Applications, and because of that Facebook has already released a few resources that provide a basic outline of how the search will work, and how businesses can optimize for it.
The goal of this post is for me to synthesize the changes for my own understanding, but I thought I would share it with the world as well. I will hopefully update this page as more information becomes available and my understanding improves.
Facebook says Graph Search can find “people, photos, places and interests”. In technical terms, the goal of Graph Search is to return Open Graph Object results, based on their connections to other objects. There are many core Facebook Graph Object types, but the ones Facebook is saying it will focus search on are:
If Facebook can’t find any relevant Object results, it will fall back to showing web results from Bing.
Facebook does not currently index and return results for Posts, Status Updates, or custom content generated by Applications. They have plans to in the future, however.
There are also general Open Graph Objects used by Apps and websites. These are the objects outside of Facebook which have been created through the Graph API, or with Open Graph meta tags on HTML pages. It’s unclear to me if any of these items are being indexed and shown in results yet, but they will at some point. These built-in types are:
Details are still scarce on this, but we at least know high-level object attributes like ‘name’, ‘about’, ‘description’ and ‘category’ will be the most important search data. You can read more about the key Page attributes in the documentation, but here is a summary of the key Page metadata:
Good general SEO practices should suffice for this content. Make sure you clearly describe your Page using relevant keywords, and categorize it properly. If it’s a Place Page with a physical location, make sure the address is correct so location or “geo” searches will work properly.
If you have a Facebook Application, the Developer blog recommends that you make sure the following “App Info” fields are up-to-date in the “App Details” settings:
Each type of graph object has different attributes, so the important fields will vary. For instance, the Facebook Group Object is much simpler than the Facebook Page, with just two key fields:
If you are optimizing content on your website or blog for Graph Search, refer to the documentation for the built-in Graph Objects and define your meta tags appropriately. Remember that Facebook has a graph object debug tool you can use to double-check your content.
Facebook has stated that “all results are unique based on the strength of relationships and connections” in a user’s social graph. This means the search rank incorporates the data shared with a user by their friends, plus any public data from users who are outside of the user’s personal graph. So in addition to relevance from matching metadata on Facebook Graph objects (described above), other social indicators are taken into account. I imagine it will be a similar algorithm to EdgeRank, including Likes, Check-ins and other social actions. The implication is that Objects with many social interactions will be ranked higher, and Objects with social interactions from the searching user’s friends will rank even higher.
It means the buzzword “GSO” (Graph Search Optimization) has already been coined! But in all seriousness, Facebook will rank results by overall social engagement: check-ins, page views, comments, etc. Optimizing a Page will mean filling out the metadata, and running an active Page that engages users with fresh, relevant content and frequent social interactions. These are the same recommendations that have always been made for running an effective Facebook Page, no surprises here. I would start by following techniques for improving EdgeRank until more specific data about Graph Search ranking comes out.
As mentioned above, if Graph Search fails to find enough results in the Graph, it will fall back to Bing search results from the web. This implies that it will also be beneficial to ensure related web properties (e.g. a website) are properly optimized for Microsoft’s Bing search engine. The Bing blog has an official post with examples of Facebook/Bing integration.
You can load a Backbone Model or Collection from the server with the fetch() method, which fires off an Ajax sync call. You can bind a callback method to a Collection’s reset event, which Backbone provides. It was my impression that the change event on a Model would do the same thing. But when I bound to the Model’s change event in a View, it did not work as expected. Here is the code that was not working:
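The original listing was lost in the site migration, so here is a minimal reconstruction of the kind of binding that fails (the view and model names are illustrative; assumes Backbone and jQuery are loaded on the page):

```javascript
// A View that re-renders when its Model changes.
var ResultView = Backbone.View.extend({
  initialize: function () {
    // Broken: no context argument is passed, so when "change" fires,
    // `this` inside render() ends up being the model, not the view
    this.model.on('change', this.render);
    this.model.fetch();
  },
  render: function () {
    this.$el.html(this.model.get('title'));
    return this;
  }
});
```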
NOTE: the on() method was formerly called bind(), so some older online examples use this.model.bind(). This still works, as Backbone keeps bind() as an alias for on().
Well, it turns out I was making a simple mistake about one of JavaScript’s trickier points: the this context in the callback function. A substantial amount of reading later (this article in particular was helpful), I realized I needed to pass in the this context as a third parameter to the on() method. There is actually a whole section on this in the Backbone documentation.
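The pitfall is plain JavaScript rather than anything Backbone-specific: a function reference detached from its object and invoked as a bare callback loses its this binding. A tiny runnable sketch (the names are made up):

```javascript
var view = {
  el: '#results',
  render: function () {
    // `this` is only the view if the caller preserves the context
    return this === view ? this.el : undefined;
  }
};

var callback = view.render; // detached, like passing this.render to on()

console.log(view.render()); // "#results" - called on the view
console.log(callback());    // undefined - `this` is no longer the view
```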
More research showed me that since Backbone version 0.9.9 a new method was added to make it more foolproof to bind callbacks with the right context: listenTo(). This method is on the object that has the callback, instead of on the object you are binding to. This way it assumes the correct context and you don’t need to worry about binding it correctly. Below is the working method of binding to a successful Model load, with the new listenTo() syntax.
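Here is a reconstruction of the working version (the original listing is gone; assumes Backbone 0.9.9+ is loaded, with illustrative names):

```javascript
var ResultView = Backbone.View.extend({
  initialize: function () {
    // listenTo is called on the view itself, so Backbone binds
    // the callback with the view as its `this` context
    this.listenTo(this.model, 'change', this.render);
    this.model.fetch();
  },
  render: function () {
    this.$el.html(this.model.get('title'));
    return this;
  }
});

// The pre-0.9.9 equivalent is to pass the context as a third argument:
// this.model.on('change', this.render, this);
```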
But before I figured that out I discovered two other ways to bind callbacks to successful Model and Collection fetch() Ajax calls, so I’ll share those here as well. Both of them use jQuery functionality, so if you are using Backbone with anything besides jQuery (like Zepto?), YMMV.
The second method is to pass a callback via option parameters into the Model’s fetch() method. fetch() passes these parameters on to the $.ajax() method, so you can pass in any options that $.ajax() accepts, including the success callback option. The trick here, as with the first technique, is making sure the callback has the correct this context. You can do this with the Underscore _.bindAll() method, which works some magic that is not quite clear to me (although this StackOverflow post helped).
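A sketch of this second technique (reconstructed, not the original listing; assumes Backbone, jQuery and Underscore are loaded):

```javascript
var ResultView = Backbone.View.extend({
  initialize: function () {
    // _.bindAll permanently binds render so it always runs
    // with this view as its `this` context
    _.bindAll(this, 'render');
    // fetch() forwards unrecognized options straight to $.ajax(),
    // including the success callback
    this.model.fetch({ success: this.render });
  },
  render: function () {
    this.$el.html(this.model.get('title'));
    return this;
  }
});
```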
The final method uses jQuery’s Deferred object, which is a powerful tool for making JavaScript more synchronous when you need it to be. I got the idea from this blog, and this article is also helpful for understanding Deferred. Basically, objects that implement Deferred (like $.ajax()) return a “promise” object, which takes in callbacks via the done() method and queues them up. The queue of callbacks is run when the asynchronous method completes.
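A sketch of the Deferred-based version (again a reconstruction; fetch() returns the jqXHR object, which is a jQuery promise):

```javascript
var ResultView = Backbone.View.extend({
  initialize: function () {
    _.bindAll(this, 'render');
    // done() queues the callback on the promise returned by fetch(),
    // to run once the underlying Ajax call completes
    this.model.fetch().done(this.render);
  },
  render: function () {
    this.$el.html(this.model.get('title'));
    return this;
  }
});
```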
These last two methods are particularly useful when you only want to bind to the initial Model load event, and not subsequent change events (i.e. change:attribute_name) which are fired every time model.set() is called. For instance, maybe you have some initial view rendering you want to do when the model first loads, and then have separate callbacks to update the view when model attributes change.
Here is a JsFiddle of all three Backbone Model fetch() bind methods to play with.
Using jQuery Deferreds in Backbone is great for binding single callbacks that depend on multiple Models loading. It is much cleaner than chaining a series of callbacks, and allows the load events to run in parallel. You can easily do multiple-dependency callbacks with the when() method, like so:
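A sketch of waiting on two fetches at once with $.when() (UserModel, SettingsModel and renderDashboard are hypothetical names standing in for the lost listing):

```javascript
var user = new UserModel();
var settings = new SettingsModel();

// $.when() aggregates the two promises; the done() callback runs
// only after BOTH fetches complete, though they load in parallel
$.when(user.fetch(), settings.fetch()).done(function () {
  renderDashboard(user, settings);
});
```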
Here is a running JsFiddle of binding to two Model fetch() calls.
Deferred is, as I said, quite powerful and I’m happy to have it in my tool belt now (even though I solved my original Backbone issue without it).
(Want to use Deferred style JS on more than just jQuery AJAX objects, or without jQuery at all? Sam Breed created a plugin/mixin to enable Deferred functionality in Underscore.js. Might be worth checking out if you are battling with asynchronous issues.)
First let me just say that deploying with any kind of version control system (VCS) is a nice step up from deploying with an old school file transfer (e.g. FTP). Here are a couple of reasons why deploying with Git is nice:
There are a couple drawbacks to using pure Git to deploy however:
you need to put a .gitignore in any empty directories you want to deploy (Git does not track empty directories)

For many purposes however it is a nice, simple solution that falls between basic file transfers and more advanced deployment options (mentioned at the end).
git pull

This was the first and simplest thing I tried. If SSH is set up with a key so there’s no need to enter a password, it’s actually pretty quick: cd into the repo and git pull to update your code - just like you would locally. You are already logged on to your server in case you need to do anything else too. Rolling back is as easy as a git checkout.
But this is not as automagical as we can get with Git. You can do all this with SVN! Moving on…
Now we are getting somewhere - pushing code without SSHing on to the server! This is one of the beautiful things about Git: it’s distributed (decentralized), so you can set up a repository on another server and push changes to it, just like you push changes to your central code repository. Even better, you can set up multiple remote repos, so you can easily push code to testing and development environments as well.
We are going to do three things:
.git files are not accidentally made public on the server. The separation also makes it easier to do things to your code without messing with your repository (like make backups or symlinks, or delete it).

Note: The following tutorial assumes you have your code already in a Git repo on your local machine, and you can SSH to your remote server.
To get started, SSH on to the remote server you want to deploy to. Then set up a bare Git repo on the remote machine, like so:
$ mkdir website.git
$ cd website.git
$ git init --bare
This creates a new empty repository in the website.git directory. Then create a post-receive hook by copying the sample one, and open it with your favorite text editor:
$ cp hooks/post-receive.sample hooks/post-receive
$ vim hooks/post-receive
Add the following line to the post-receive file (it should have #!/bin/sh at the top):
GIT_WORK_TREE=/path/to/webroot/of/website git checkout -f
This file will run after the repo has received (post-receive, see?) new code from a git push. It will check out the latest code that was pushed (-f forces it to, even if there are local changes) into the GIT_WORK_TREE we are declaring (which is our website webroot).
I found another version of this technique, which does things a little differently. It sets the work tree (your webroot) in the config file, instead of setting it in the hook script. And instead of using a bare repo, it sets the denycurrentbranch = ignore option, which basically overrides errors normally caused by pushing to a non-bare repo. I’m not positive why this is better, I’d love to hear if anyone knows!
So for this technique, after setting up your bare repo you just adjust the Git config like so:
$ git config core.worktree /path/to/webroot/of/website
$ git config core.bare false
$ git config receive.denycurrentbranch ignore
Then your post-receive hook looks just like this (since the worktree is already set):
git checkout -f
I found a very in-depth article by Sitaram Chamarty which outlines 6 different ways to deploy using a Git post-receive hook, which is worth checking out. It covers the main checkout technique I’m showing here, plus a few others. For instance, one method uses fetch:
cd /path/to/webroot/of/website
unset GIT_DIR
git fetch origin
git reset --hard origin/master
And a few methods use the Git Archive dump feature, for instance:
git archive master | tar -C /path/to/webroot/of/website -xf -
Really, check out the article, it’s great.
Regardless which of the two techniques you use above, make sure your post-receive hook is executable so it will run properly:
$ chmod +x hooks/post-receive
Now add a new remote to your local repo:
$ git remote add deploy git@myserver.com:mywebsite.git
Now deploy your code with a push command! (This assumes you want to push the master branch to this remote.)
$ git push deploy +master:refs/heads/master
Now that the default head has been set you can push with the following, shorter command:
$ git push deploy
The git-receive-pack: command not found error

I got this error when I git pushed to my remote host. On my cheap shared host the $PATH for my user doesn’t load for some reason, so it cannot find git-receive-pack. An easy way to fix this is to set the uploadpack and receivepack variables in the local Git config for your remote. Run these commands locally to set them (and change “deploy” to the name of your remote):
$ git config remote.deploy.uploadpack /path/to/git-upload-pack
$ git config remote.deploy.receivepack /path/to/git-receive-pack
You may need to do a which git-receive-pack on your remote host to find the path you need to set. More info on StackOverflow and this blog.
Deploying with Git hooks is nice, and it’s met my needs so far. But there are more powerful tools available, like Capistrano and Fabric. I have not used any of these tools, but it looks like they allow you to do the same one-line deployment commands (which also pull from your git repo), as well as providing extra tasks to adjust file permissions, clear cache directories, make database changes, etc. If you have a complex deployment process you want to automate I would suggest looking at one of these.
Using the post-receive hook to run extra deployment tasks

What if you just need to run a few extra commands on your deploy? Or what if you already have a deployment script of some sort, but want to run it with an easy push command?
On one of my projects I have a large Phing script which runs a number of deployment tasks. I call it after I check out the new code from Git in the post-receive hook, like so:
GIT_WORK_TREE=/path/to/webroot/of/website git checkout -f
phing -f /path/to/webroot/of/website/build.xml deploy-upgrade
You could simply add some additional shell commands, or call any other sort of additional deployment scripts the same way.
It was 4 years ago that I first decided to blog about web design and development on my freelance web design site, Chili Pepper Design. I read some high praise for the Textpattern PHP content management system, and installed it. It worked well, but had started to feel dated of late. And besides, none of the cool kids on Hacker News ever talk about Textpattern any more. ;)
So what are the cool kids talking about now? What is the new hotness? Ruby-powered static Jekyll blogs of course! Thus you see before you the new Chili Pepper Design blog, powered by Octopress (a fork of Jekyll).
Actually it’s not that new or hot anymore, but here are a few reasons why I chose to go the Jekyll/Octopress route regardless:
Next up I’m going to stretch out my old blogging muscles with a post about how I migrated my blog from Textpattern to Octopress - on windows (dun dun dun).
Cheers!
In the newest version of the Magento ecommerce platform, Magento 1.5, they have redone their update script. In Magento Connect 2.0, what was called “pear” is now “mage”.
Old and busted:
./pear mage-setup .
New Hotness:
./mage mage-setup .
They basically work the same, but the name has changed and – more importantly if you are reading this – there is no Windows batch file to run. There is a Bash shell script, so you can download and update Magento and its many Extensions via the command line in Linux, but not Windows.
So, looking at the mage.sh script and working from the pear.bat script, I made mage.bat. It lets you do command line downloads and updates of Magento in Windows, just like you can on Linux.
Not that anyone is actually hosting Magento on a WAMP setup, but I find it useful to be able to download and install Extensions and modules on my Windows development box.
Download mage.bat – Windows Magento Connect 2.0 Update Script
Once you have mage.bat installed in the root directory of your Magento install (right next to the mage bash script), you are ready to go! Just run “mage” from the Windows command line.
mage help
The Yiero blog has a nice article about using Magento Connect 2.0 over command line.
Here is a good article about upgrading Magento from 1.4 to 1.5 as well.
TROUBLESHOOTING TIPS:
FBML is going away. Is the popular (and effective) strategy of “fan-gating” content to encourage visitors to Like your page going away with the fb:visible-to-connection FBML tag?
No! Facebook has been kind enough to provide a new method for implementing “reveal tabs” on the new iframe tabs. Follow along in the tutorial below to learn how to do this.
(This tutorial assumes you are creating a PHP Facebook application, but you can follow the spirit of it in any language.)
I won’t cover all of the details here, but you need to set up an iframe tab application on Facebook first. Assuming you have access to put PHP files up on a public server, follow these steps:
NOTE: There is some confusion about the Canvas URL and the Tab URL. The Canvas URL is the PHP file that gets hit when the user views the Canvas app. This is different from the PHP script which generates the Tab, which is defined by the Tab URL. The Tab URL has to be relative to the Canvas URL. What I usually do is have the Canvas PHP file be “index.php” and point the Canvas URL to the directory it’s in. Then I make a “tab.php” file for the Tab URL in the same directory.
Facebook now passes an encoded POST variable to tab applications, called the signed_request. The content of a tab’s signed_request is covered here in the canvas guide. General details about Facebook’s signed requests can be found here.
To get the signed_request, make sure the “OAuth 2.0 for Canvas” option is set in your application’s settings. Then just grab it out of PHP’s $_REQUEST array global.
$signed_request = $_REQUEST['signed_request'];
The signed request is encoded for security. Read more about this here. We are not really interested in security on this app though; we just want to tell if a visitor is a fan or not. So we are going to use the “cheater” method of decoding the signed_request, which ignores some of the security features (like the signature). Here is the cheater code (you will need PHP’s json_decode() function!):
function parsePageSignedRequest() {
if (isset($_REQUEST['signed_request'])) {
$encoded_sig = null;
$payload = null;
list($encoded_sig, $payload) = explode('.', $_REQUEST['signed_request'], 2);
$sig = base64_decode(strtr($encoded_sig, '-_', '+/'));
$data = json_decode(base64_decode(strtr($payload, '-_', '+/'), true));
return $data;
}
return false;
}
print_r(parsePageSignedRequest());
The signed_request on iframe tabs has a “page” object, which holds a “liked” variable. If the user viewing your tab has Liked your page, it is set to TRUE. If they have not, it is set to FALSE. So:
function parsePageSignedRequest() {
if (isset($_REQUEST['signed_request'])) {
$encoded_sig = null;
$payload = null;
list($encoded_sig, $payload) = explode('.', $_REQUEST['signed_request'], 2);
$sig = base64_decode(strtr($encoded_sig, '-_', '+/'));
$data = json_decode(base64_decode(strtr($payload, '-_', '+/'), true));
return $data;
}
return false;
}
if($signed_request = parsePageSignedRequest()) {
if($signed_request->page->liked) {
echo "This content is for Fans only!";
} else {
echo "Please click on the Like button to view this tab!";
}
}
This new method is actually better than the old method in a couple of ways:
Good luck, and happy Facebook developing!
The great thing about iframe tabs is that you are working with a real webpage now. No more fighting with FBJS and finicky FBML tags!
One of the interesting things is that the ubiquitous Static FBML app is going away too, apparently. This application has been a great shortcut for many developers and page owners, allowing them to just install the app and paste in HTML and FBML code without creating a special custom application. (Hopefully more people will use the custom Facebook tab creation service I built, SplashLab Social ;)
This will be a pain for many users. It’s a big leap to have to create a custom application and host the code on your own server! I will be curious to see if Facebook creates a new app that does some of the same things.
Naturally, like any new (and many old) Facebook features, iframe tabs still have bugs. My preliminary tests turned up the following problems:
Iframe tab height: This is not a bug, but I DO think it’s a little weird and not very obvious. You can fix this with the FB.Canvas.setAutoResize() method, as detailed in the bug I filed.
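A sketch of wiring that up (assumes the Facebook JavaScript SDK is loaded in the tab page, and YOUR_APP_ID is a placeholder; this is the API as I understand it at the time of writing):

```javascript
window.fbAsyncInit = function () {
  FB.init({ appId: 'YOUR_APP_ID', xfbml: true });
  // Poll the document size and resize the iframe to fit the content
  FB.Canvas.setAutoResize();
};
```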
In summary, the new iframe tabs are more powerful and flexible than the FBML ones, and open up a lot of great new possibilities. There are a LOT of FBML apps out there, however, so it will be interesting to watch the transition.
Please, post any comments or corrections you have below! Are there any other new features of iframe tabs you are excited for? Anything you will miss from FBML?
Update: SplashLab Social has been released! If you are looking for an easier way to set up iframe tabs, look no further! My new service makes it easy – check it out!
Update: The project is now live and ready for signups! Head over to SplashLab Social to learn more and get started. It’s the easiest way to create and manage content on your Facebook Page’s iframe tabs!
There’s something big in the works here at Chili Pepper Design I wanted to share on my blog quickly:
Right now the social networking site Facebook is quickly becoming the king of the web. Some sources claim that Facebook has passed Google as the most visited website on the web. For this reason Fan Pages are becoming an important part of the marketing strategies of small businesses and large corporations alike. Setting up a Fan Page is faster and cheaper than setting up a website, and allows customers to interact with the business/brand (and share it with their friends) in ways that traditional web pages do not.
These Facebook Fan Pages consist of a series of “tabs”. In addition to the Wall, and the usual Photos, Videos, Discussion and Reviews tabs, many businesses are adding custom FBML tabs. These provide additional information and a touch of that business’ personality and brand. These nicely designed tabs offer special promotions, games, contests, product information, shopping options, blog and twitter feeds, and much more. For some great examples of what can be accomplished with custom FBML tabs check out the Facebook Showcase.
The one drawback to these otherwise-awesome custom page tabs is that they require a bit of technical savvy to set up. Some require the creation of a custom application, and even making a simple tab with the Static FBML application requires FBML coding. There are many tutorials that cover how to set up custom tabs (like my own FBML tab tutorial here), but frankly it’s easier to set up an entire new website with services like WordPress.com and Blogger than to set up a custom FBML tab.
And even once a tab is set up, it is difficult to update it. The FBML code must be edited again, just like when it was first set up. This is one area where websites running Content Management Systems (CMSs) like Drupal and WordPress are far ahead of custom Facebook tabs. Editing a page in Drupal is as easy as composing an email – no coding needed!
But what if creating a custom FBML tab was just as simple as setting up blog on Blogger? Just sign up, choose a design template, enter the content, and you’re done?
What if updating a custom FBML tab was as simple as updating a page on a Drupal website? Just log in, change the content, and hit “Update”?
This is exactly the problem that Chili Pepper Design is working on addressing. With the Facebook Content Management System currently in development it will be easy as pie to create and update custom FBML tabs with nice templates on a Fan Page.
Here’s how it will work:
Here are some of the many components that the templates will have (in various combinations):
The templates will be professionally designed, optimized for SEO, and have the ability to customize some of the colors.
Are you a Facebook designer? Would you like to provide your clients with an easy way to update the pages you have created for them? In addition to the standard templates, the Facebook CMS will offer a special product for designers to plug the CMS into their designs. The details on this have not been worked out, but it is a service I very much want to offer, along the lines of Cushy CMS or Surreal CMS.
Check out SplashLab Social to learn more!
On one of my latest Drupal websites there are multiple bloggers, none of whom are particularly tech savvy. I set up the CMS with TinyMCE because I like how they can drag and drop images around (I just wish Drupal had an image embed uploader like WordPress!). After a while they complained that the Insert Flash button in TinyMCE just wasn’t cutting it. So I set about fixing it. How hard could it be to make a plugin that just inserts some HTML code where your cursor is? Like it already does for images?
It turns out it is very easy. I was able to just edit the TinyMCE “Example” plug-in, which already did basically exactly what I wanted. So here I provide it to you, Internet. I hope it is useful, and please let me know if it’s broken (or could work better). Thanks!
UPDATED: (9-17-2010) I fixed some bugs with IE. It should work better cross-browser now.
Download the plugin:
I found some tips on installing TinyMCE plugins here: Installing Plugin Newbie Question
Here is the official guide to creating a plugin for TinyMCE 3: TinyMCE:Create plugin/3.x
To use this plugin, install it in the TinyMCE plugin folder and just include it in the tinyMCE.init() call like you would the other plugins. Example (“embed” is the name of the plugin):
<script type="text/javascript" src="<your installation path>/tiny_mce/tiny_mce.js"></script>
<script type="text/javascript">
tinyMCE.init({
theme : "advanced",
mode : "textareas",
plugins : "embed, fullscreen, emotions, preview",
theme_advanced_buttons3_add : "embed, fullscreen, emotions, preview"
});
</script>
Disclaimer: I am not a TinyMCE guru, and not even that hot at JavaScript. I probably cannot help you with other TinyMCE plugins, or even this one. I only got this to work because the Example plugin was so close to what I already needed. But thanks for stopping by!
It also turns out there is a steep learning curve to making “Ajax forms” with the Drupal Forms API. I got it working, but it took a fair amount of effort. So, to help out future Drupal AHAH developers I am providing my code below, along with a list of links to resources that were a great help in unraveling this problem.
First, to help provide an “aerial view” of what’s going on here, this is a list of the components involved:
To start out, here is the example admin settings form ahahtestmodule_admin_settings with both fields (ahahtestmodule_types and ahahtestmodule_ahah_field):
<?php
function ahahtestmodule_admin_settings() {
$form = array();
$form['settings'] = array(
'#type' => 'fieldset',
'#title' => t('ahahtestmodule Settings'),
);
$form['settings']['ahahtestmodule_types'] = array(
'#type' => 'radios',
'#title' => t('First Field'),
'#description' => t('Change this field to change the options in the next field.'),
'#options' => array('one' => t('Option 1'), 'two' => t('Option 2'), 'three' => t('Option 3')),
'#default_value' => variable_get('ahahtestmodule_types', 'one'),
'#ahah' => array(
'path' => 'ahahtestmodule/ahahjs',
'wrapper' => 'ahah-wrapper',
'method' => 'replace',
),
);
$form['settings']['ahahtestmodule_ahah_field'] = array(
'#type' => 'select',
'#title' => t('Dependent Second Field'),
'#options' => ahahtestmodule_get_ahah_fields(variable_get('ahahtestmodule_types', 'one')),
'#default_value' => variable_get('ahahtestmodule_ahah_field', 'none'),
'#description' => t('This field\'s content depends on what is selected in the first field.'),
'#prefix' => '<div id="ahah-wrapper">',
'#suffix' => '</div>',
);
return system_settings_form($form);
}
?>
Next, here is the dummy function that gets the right content for ahahtestmodule_ahah_field based on ahahtestmodule_types:
<?php
function ahahtestmodule_get_ahah_fields($first_variable) {
$ahah_fields = array();
switch ($first_variable) {
case 'one':
$ahah_fields['one'] = 'Option 1 Was Selected';
break;
case 'two':
$ahah_fields['two'] = 'Option 2 Was Selected';
$ahah_fields['two_bonus'] = 'Bonus Option!';
break;
case 'three':
$ahah_fields['three'] = 'Option 3 Was Selected';
break;
default:
$ahah_fields['none'] = 'Please Select...';
}
return $ahah_fields;
}
?>
Then, here is the magic AHAH callback function that I don’t fully understand and ripped right off this article at drupal.org: Adding dynamic form elements using AHAH:
<?php
// The AHAH callback function
function ahahtestmodule_ahah_field_js() {
// The AHAH callback function triggered by the user changing the first field, "ahahtestmodule_types"
$form_state = array('storage' => NULL, 'submitted' => FALSE);
$form_build_id = $_POST['form_build_id'];
// Get for the form from the cache
$form = form_get_cache($form_build_id, $form_state);
// Get the form set up to process
$args = $form['#parameters'];
$form_id = array_shift($args);
$form_state['post'] = $form['#post'] = $_POST;
$form['#programmed'] = $form['#redirect'] = FALSE;
// Process the form with drupal_process_form(), which calls the submit handlers that put whatever was worthy of keeping in the $form_state
drupal_process_form($form_id, $form, $form_state);
// Call drupal_rebuild_form(), which destroys $_POST, creates the form again with hook_form, gets the new form cached and processed again
$form = drupal_rebuild_form($form_id, $form_state, $args, $form_build_id);
// THIS IS WHAT YOU WILL CUSTOMIZE FOR YOUR OWN FORM
// Choose the field you want to update with AHAH and render it
$ahah_form = $form['settings']['ahahtestmodule_ahah_field'];
unset($ahah_form['#prefix'], $ahah_form['#suffix']);
$output = drupal_render($ahah_form);
// Final rendering callback.
drupal_json(array('status' => TRUE, 'data' => $output));
}
?>
Lastly, be sure to add the menu callback for ahahtestmodule_ahah_field_js():
<?php
function ahahtestmodule_menu() {
  $items = array();
  $items['ahahtestmodule/ahahjs'] = array(
    'page callback' => 'ahahtestmodule_ahah_field_js',
    'access arguments' => array('administer ahahtestmodule'),
    'type' => MENU_CALLBACK,
  );
  return $items;
}
?>
I rolled this whole thing up into a little demo module that does nothing except run all this code:
Download the Drupal Ahah Test Module
Here are some links that I used to figure this out that will hopefully help you too:
A big thanks to the Drupal community as always for putting so much helpful support up online for free!
Granted, the title of this post is hyperbolic link-bait. ;) But it’s true nonetheless!
I’ve had this site, http://chilipepperdesign.com, up for probably five years now but only as a static portfolio site for most of that time. Then, a year ago, I upgraded the site and added a blog to it. In that year my monthly page views have increased 2,500% and the number of monthly visits has gone up 4,400%.
I don’t claim to be a great blogger. I’m not even a prolific blogger. And I had a very small amount of traffic before, so the percentage gain is in some ways exaggerated. But it’s still an impressive change to me. The idea that casual blogging can increase traffic by even 1,000% would be impressive! But let’s get a little perspective:
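To be concrete about what those percentages mean, here is the arithmetic with illustrative numbers (assumed for the example — not my real analytics):

```javascript
// Illustrative numbers only -- these are assumptions, not my real stats.
// A "2,500% increase" means the new value is 26x the old one.
function percentIncrease(before, after) {
  return ((after - before) / before) * 100;
}

var before = 100;        // say, 100 page views per month before blogging
var after = before * 26; // 2,600 page views per month after
console.log(percentIncrease(before, after)); // 2500
```

So even a modest starting point multiplied 26 times over is still a modest number in absolute terms, which is why the percentages look so dramatic.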
So, take from this what you will, but it is a fact: blogging gives search engines what they need – content. When you provide a search engine with content you will be rewarded.
Happy blogging!
But I wanted some of the View’s query parameters to change dynamically based on my module’s admin settings. You could probably do this with custom Views Argument PHP code, but I didn’t want to: it divorces the code from the module too much. It would mean that after installing my module I would still have to create a View and paste in custom PHP code! Yuck.
Looking through the Views 2 documentation I learned how to create “default views” for a module, but still I didn’t like this approach. This would mean that the custom view would still appear in the Views list, and could be disabled, modified, and what have you. What I really wanted to do was just create a View programmatically in my module code. How hard could it be?
With Views 1 it was apparently easy to do this with the views_build_view() method, and I found many articles explaining how. But I am using Views 2, so these were of no help.
Some poking around in the Views code showed me the way, however, and it turns out it’s pretty easy after all.
Basically, all you need to do is create a view using the Views UI, then Export it to get most of the code. You can’t quite use the exported code as-is, though: you need to replace the first line of the export ($view = new view;) with $view = views_new_view();, which does essentially the same thing. Once you’ve replaced that line you can create the view anywhere you want in your module’s code, then execute it, embed it, or whatever you like by calling the appropriate functions (like $view->execute_display('default', array())). Here is a piece of example code using a simple view that displays the Title field of all Published nodes:
//create a new view
$view = views_new_view();
//define the view (this code was generated by the Export)
$view->name = 'test_date_view';
$view->description = '';
$view->tag = '';
$view->view_php = '';
$view->base_table = 'node';
$view->is_cacheable = FALSE;
$view->api_version = 2;
$view->disabled = FALSE; /* Edit this to true to make a default view disabled initially */
$handler = $view->new_display('default', 'Defaults', 'default');
$handler->override_option('fields', array(
  'title' => array(
    'label' => 'Title',
    'alter' => array(
      'alter_text' => 0,
      'text' => '',
      'make_link' => 0,
      'path' => '',
      'alt' => '',
      'prefix' => '',
      'suffix' => '',
      'help' => '',
      'trim' => 0,
      'max_length' => '',
      'word_boundary' => 1,
      'ellipsis' => 1,
      'html' => 0,
    ),
    'link_to_node' => 1,
    'exclude' => 0,
    'id' => 'title',
    'table' => 'node',
    'field' => 'title',
    'relationship' => 'none',
  ),
));
$handler->override_option('filters', array(
  'status' => array(
    'operator' => '=',
    'value' => '1',
    'group' => '0',
    'exposed' => FALSE,
    'expose' => array(
      'operator' => FALSE,
      'label' => '',
    ),
    'id' => 'status',
    'table' => 'node',
    'field' => 'status',
    'relationship' => 'none',
  ),
));
$handler->override_option('access', array(
  'type' => 'none',
));
$handler->override_option('cache', array(
  'type' => 'none',
));
$handler->override_option('row_options', array(
  'inline' => array(),
  'separator' => '',
));
// now output the view (or whatever you want to do with it)
print $view->execute_display('default', array());
I posted this over in the Drupal documentation as well.
To do this I just had to add a little extra code to the initCallback method I was already using to bind the external controls. Here is the JS needed to unobtrusively create a jCarousel with 4 external control links:
<script type="text/javascript">
//this function creates the control links and binds them
function mycarousel_initCallback(carousel) {
  // add the controls here
  carousel.container.before('<div class="jcarousel-control"><a href="#">1</a><a href="#">2</a><a href="#">3</a><a href="#">4</a></div>');
  // now bind the controls
  jQuery('.jcarousel-control a').bind('click', function() {
    carousel.scroll(jQuery.jcarousel.intval(jQuery(this).text()));
    return false;
  });
}
//create the jCarousel on the #mycarousel element with the initCallback above
//(note: no trailing comma after initCallback -- that breaks older IE)
jQuery(document).ready(function() {
  jQuery("#mycarousel").jcarousel({
    initCallback: mycarousel_initCallback
  });
});
</script>
I simply append the .jcarousel-control div before the carousel container using carousel.container.before(). You could just as easily add it after the carousel by using carousel.container.after().
I love how easy jQuery makes progressive enhancement! Enjoy.
This is a total hack, since I have not looked into how to properly make a plugin for the new CKEditor, but it works pretty well. Maybe someone who knows how to make CKEditor plugins can show me the way.
To start hacking, we first need to replace the minified ckeditor\plugins\image\dialogs\image.js with the readable source version at ckeditor\_source\plugins\image\dialogs\image.js, since the minified image.js is really hard to work with.
In image.js, all of the tabs and fields in the Image popup dialog are defined in the “contents” array (below the onHide function). We will add our new “rel” and “title” fields to the “Link” tab’s sub-array, at the end of the tab’s definition after the existing txtUrl, browse, and cmbTarget fields (around line 1146). The code I added is:
{
  id : 'txtTitle',
  type : 'text',
  label : editor.lang.link.advisoryTitle,
  'default' : '',
  setup : function( type, element )
  {
    if ( type == LINK )
    {
      this.setValue( element.getAttribute( 'title' ) );
    }
  },
  commit : function( type, element )
  {
    if ( type == LINK )
    {
      if ( this.getValue() || this.isChanged() )
      {
        element.setAttribute( 'title', this.getValue() );
      }
    }
  }
},
{
  id : 'txtRel',
  type : 'text',
  label : editor.lang.link.rel,
  'default' : '',
  setup : function( type, element )
  {
    if ( type == LINK )
    {
      this.setValue( element.getAttribute( 'rel' ) );
    }
  },
  commit : function( type, element )
  {
    if ( type == LINK )
    {
      if ( this.getValue() || this.isChanged() )
      {
        element.setAttribute( 'rel', this.getValue() );
      }
    }
  }
},
This adds a txtTitle and txtRel field to the Link tab on the Insert Image CKEditor dialog. Look out for commas! The first time I did this I missed a comma between the cmbTarget and txtTitle declarations, which borked everything.
The final thing we need to do is create the “editor.lang.link.rel” English translation entry so the new txtRel field is properly labeled in the dialog (I re-used the existing “editor.lang.link.advisoryTitle” translation for txtTitle). To do this, open ckeditor\lang\en.js. I added the following snippet between styles and selectAnchor — just make sure it ends up in the “Link” block of translations.
rel:'Rel',
You could probably hard-code this label, but I kept with the translation system. I didn’t do any labels except the English one, but you get the idea. If you are using another language insert the “rel” label into the appropriate file for that language.
You can now set the “rel” and “title” attributes on the image link, and use them with the Lightbox of your choice!
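The setup/commit pair used in the fields above follows a simple round-trip pattern: setup reads the attribute off the selected element into the field when the dialog opens, and commit writes the field’s value back when it closes. Here is that pattern modeled in plain JavaScript — a toy stand-in for illustration, not CKEditor’s real dialog API:

```javascript
// Toy model of a dialog field with setup/commit (NOT CKEditor's actual API)
function makeAttributeField(attr) {
  let value = '';
  return {
    setValue(v) { value = v; },
    getValue() { return value; },
    // setup: populate the field from the element being edited
    setup(element) { value = element.getAttribute(attr) || ''; },
    // commit: write the (possibly edited) value back to the element
    commit(element) {
      if (value) element.setAttribute(attr, value);
    },
  };
}

// Minimal element stand-in with a pre-existing rel attribute
const element = {
  attrs: { rel: 'lightbox' },
  getAttribute(name) { return this.attrs[name]; },
  setAttribute(name, v) { this.attrs[name] = v; },
};

const relField = makeAttributeField('rel');
relField.setup(element);            // field now holds "lightbox"
relField.setValue('lightbox[roadtrip]'); // user edits the field
relField.commit(element);           // attribute updated on the element
console.log(element.getAttribute('rel')); // lightbox[roadtrip]
```

Any new field you bolt onto a dialog tab just needs those two hooks wired to the attribute it manages.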
This part explains how I set up my public web directories, and how I did the Nginx configuration files for a Magento site. One of the tricky things about switching to Nginx from Apache is that Nginx does not use .htaccess files or the usual Apache mod_rewrite rules for pretty/SEO URLs. You can replicate all of Magento’s mod_rewrite rules with Nginx’s own rewrite module, but it takes some getting used to and Magento doesn’t come with them pre-written (like the mod_rewrite rules in the .htaccess files).
Here are some of the resources I used:
First we’ll set up the directory structure like we are used to with Apache:
# mkdir /usr/local/nginx/sites-available
# mkdir /usr/local/nginx/sites-enabled
# mkdir -p /var/www/example1.com/public
# mkdir -p /var/www/example1.com/log
# cp /usr/local/nginx/conf/nginx.conf /usr/local/nginx/conf/nginx.conf-original
Then set up the base nginx conf file /usr/local/nginx/conf/nginx.conf:
user www-data;
worker_processes 1;

events {
  worker_connections 1024;
}

http {
  include mime.types;
  default_type application/octet-stream;
  sendfile on;
  keepalive_timeout 65;
  gzip on;
  include /usr/local/nginx/sites-enabled/*;
}
Then set up the Nginx conf file for your website. I am using the default site in this example /usr/local/nginx/sites-available/default, but if you have multiple sites instead of default you would create files called site1.com, site2.com, etc.
This file is where all of the rewrite magic happens. The URL in this example is example1.com, which you should replace with your own. The document root is /var/www/example1.com/public which, again, you will need to change to match your own configuration.
# fastcgi nodes
upstream backend {
  server unix:/tmp/fcgi.sock;
}

# redirect all non-www requests to www requests (it would be easy to reverse this)
server {
  listen 80;
  server_name example1.com;
  rewrite ^/(.*) http://www.example1.com/$1 permanent;
}

server {
  listen 80;
  server_name www.example1.com;

  # protection (we have no .htaccess)
  location ~ (/(app/|includes/|lib/|pkginfo/|var/|report/config.xml)|/\.svn/|/\.hta.+) {
    deny all;
  }

  # pass php files over to PHP-FPM via the socket
  location ~ \.php$ {
    fastcgi_index index.php;
    include /etc/nginx/fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    if (-e $request_filename) { # only pass scripts that actually exist
      fastcgi_pass backend;
    }
  }

  # the javascript compressor
  location ^~ /js/index.php {
    fastcgi_pass backend;
    fastcgi_index index.php;
    fastcgi_param SCRIPT_NAME $fastcgi_script_name;
    fastcgi_param SCRIPT_FILENAME /var/www/example1.com/public$fastcgi_script_name;
    include /etc/nginx/fastcgi_params;
    access_log off;
    expires 30d;
  }

  # special case for the error "report" pages
  location /report/ {
    fastcgi_index index.php;
    include /etc/nginx/fastcgi_params;
    fastcgi_param SCRIPT_NAME /report/index.php;
    fastcgi_param SCRIPT_FILENAME /var/www/example1.com/public/report/index.php;
    if (!-f $request_filename) {
      fastcgi_pass backend;
      break;
    }
  }

  # pass everything else over to PHP-FPM via the socket
  location / {
    root /var/www/example1.com/public/; # absolute path doc root
    index index.php index.html index.htm;
    # set expire headers
    if ($request_uri ~* "\.(ico|css|js|gif|jpe?g|png)$") {
      expires max;
    }
    # set fastcgi settings, not allowed in the "if" block
    include /etc/nginx/fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root/index.php;
    fastcgi_param SCRIPT_NAME /index.php;
    # rewrite - if file not found, pass it to the backend
    if (!-f $request_filename) {
      fastcgi_pass backend;
      break;
    }
    error_page 404 index.php;
  }

  access_log /var/www/example1.com/log/access.log;
  error_log /var/www/example1.com/log/error.log;
}
I am no expert with Nginx, but a lot of Google searching and trial and error gave me the file above, and it seems to work. I hope it is helpful, even if this exact code doesn’t work for you.
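As a quick sanity check of the non-www redirect near the top of the file, you can simulate the capture-and-substitution outside of Nginx with sed. This is purely illustrative — Nginx’s rewrite module does the real work — but it shows exactly how the `^/(.*)` capture lands in `$1`:

```shell
# Simulate: rewrite ^/(.*) http://www.example1.com/$1 permanent;
echo "/category/product.html" \
  | sed -E 's|^/(.*)|http://www.example1.com/\1|'
# prints: http://www.example1.com/category/product.html
```

Every path on the bare domain maps onto the same path under www, which is what the `permanent` (301) redirect tells search engines too.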
Finally, activate the site by creating a symbolic link to it and restarting the server:
# ln -s /usr/local/nginx/sites-available/default /usr/local/nginx/sites-enabled/default
# ln -s /usr/local/nginx/sites-enabled /etc/sites
# /etc/init.d/nginx restart
And that’s it! You should now have Magento running on Nginx! (After, of course, also installing MySQL and a sendmail program, and probably an FTP server, and who knows what else :)
Now, how to make it actually go faster than Apache? There are many settings you can tweak. Look in fastcgi_params to change PHP-FPM settings, php.ini to change APC settings, my.cnf to change MySQL settings, nginx.conf to adjust the number of worker processes and the keepalive timeouts… the list goes on. Good luck!
In this part, we compile PHP-FPM and get it working with the APC “Accelerator” cache for Nginx. The first step is to compile PHP-FPM. Some of this is taken almost verbatim from the FPM readme:
# aptitude install -y libxml2-dev libevent-dev libjpeg-dev
# export LE_VER=1.4.12-stable
# wget "http://www.monkey.org/~provos/libevent-$LE_VER.tar.gz"
# tar -zxvf "libevent-$LE_VER.tar.gz"
# cd "libevent-$LE_VER"
# ./configure && make
# DESTDIR=$PWD make install
# export LIBEVENT_SEARCH_PATH="$PWD/usr/local"
# export PHP_VER=5.2.11
# cd ~/sources
# wget "http://launchpad.net/php-fpm/master/0.6/+download/php-fpm-0.6~$PHP_VER.tar.gz"
# tar -zxvf "php-fpm-0.6~$PHP_VER.tar.gz"
# ./php-fpm-0.6-5.2.11/generate-fpm-patch
# wget "http://us.php.net/get/php-$PHP_VER.tar.gz/from/us.php.net/mirror"
# tar xvfz "php-$PHP_VER.tar.gz"
# cd "php-$PHP_VER"
# patch -p1 < ../fpm.patch
# ./buildconf --force
# mkdir fpm-build && cd fpm-build
# aptitude install libxml2-dev libbz2-dev libpcre3-dev libmcrypt-dev libmhash-dev libmhash2 libcurl4-openssl-dev libsyck0-dev libgd-dev zlib1g-dev
# ../configure --with-fpm --with-libevent="$LIBEVENT_SEARCH_PATH" --enable-mbstring --with-zlib --enable-zip --with-mcrypt --with-jpeg-dir=/usr/lib --with-gd --without-sqlite --without-pdo_sqlite --enable-fastcgi --with-curl --with-mhash --with-mysql=/etc/mysql/ --enable-pdo=shared --with-pdo-mysql=shared
# make all install
# make test
# aptitude install m4 autoconf
# mount -o remount,exec,suid /tmp
# pecl install apc
# cp /root/sources/php-5.2.11/php.ini-recommended /usr/local/lib/php.ini
# ln -s /usr/local/lib/php.ini /etc/php/php.ini
# ln -s /etc/php-fpm.conf /etc/php/php-fpm.conf
At the end of the commands above, you will notice we installed APC. Now, in /etc/php/php-fpm.conf there are four places you need to change the user and group to the correct ones for the web server. On my install, the user and group were both www-data:
<value name="owner">www-data</value>
<value name="group">www-data</value>
<value name="user">www-data</value>
<value name="group">www-data</value>
To set up some basic parameters for the PHP-FPM FastCGI install, create the file /etc/nginx/fastcgi_params. We will be importing this into the Nginx config files later. This is what I am using, but you may need to tweak yours for better performance:
fastcgi_connect_timeout 60;
fastcgi_send_timeout 180;
fastcgi_read_timeout 180;
fastcgi_buffer_size 128k;
fastcgi_buffers 4 256k;
fastcgi_busy_buffers_size 256k;
fastcgi_temp_file_write_size 256k;
fastcgi_intercept_errors on;
Lastly, edit your php.ini file /etc/php/php.ini to enable PDO for MySQL (which Magento needs) and APC. To do so, add the following lines:
extension_dir = /usr/local/lib/php/extensions/no-debug-non-zts-20060613
extension = pdo.so
extension = pdo_mysql.so
extension = apc.so
apc.enabled = 1
apc.shm_size = 96
apc.include_once_override = 1
I remember there being some sort of confusion the first time I tried setting the extension_dir, so check and make sure that it’s correct for your environment. It might also be /usr/lib/php5/20060613.
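Since the directory varies by install, it is worth confirming the path you put in php.ini actually exists before restarting anything. A minimal check (the path below is the one from this install — an assumption for yours):

```shell
# Sanity-check that the extension_dir from php.ini exists on disk;
# adjust EXT_DIR to whatever your environment uses.
EXT_DIR=/usr/local/lib/php/extensions/no-debug-non-zts-20060613
if [ -d "$EXT_DIR" ]; then
  echo "extension_dir looks good: $EXT_DIR"
else
  echo "extension_dir not found: $EXT_DIR -- check 'php -i' output for the real path"
fi
```

If the directory is wrong, PHP will silently fail to load pdo_mysql.so and apc.so, and Magento will not connect to the database.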
There, nothing to it! You should have PHP-FPM running now. The final step is to create the Nginx config files so requests for PHP are passed over to PHP-FPM for processing. This is covered in Part 3, along with some Magento specific Nginx config options.
So what can be done to speed up Magento?
There are a number of things. I have tried almost all of them at this point. And I did reduce the page load time greatly, at least on the front end. But all of the AJAXy scripts on the backend (which has no caching, as far as I know) meant store maintenance and order fulfillment remained a tedious process. So I decided to try one final thing: make the switch from trusty old Apache to Nginx.
I did have Apache2 tweaked out and running quite fast, but Apache has a big memory footprint (especially when running mod_php), and this Magento install is on a 256MB “slice” in the Rackspace Cloud, so I want to keep the amount of RAM needed to run the store as low as possible without sacrificing performance. Nginx is a really lightweight web server, and when paired with PHP-FPM (“FastCGI Process Manager”, a FastCGI process-management patch for PHP) it is supposedly the fastest and most memory-efficient way around to serve PHP scripts. Sounds like just what I need!
I got Magento running on Nginx with PHP-FPM and APC and it runs about as fast as my Apache install. I was hoping for a miracle and didn’t get it, but considering it’s on an anorexic little 256MB Cloud Server I would say it is performing admirably. Perhaps with more time to tune the performance (and bumping up to a bigger slice) it would really be fast, but I worked a long time on the previous Apache install so it was a tough act to follow. This article is the first of three that will explain what I did to get Magento running on this stack.
DISCLAIMER: I am a pretty serious Linux n00b. I’m sure I installed unnecessary packages, added extra compile flags, and what have you. You might bork your server trying this stuff, so don’t do it in a production environment!! However, I did get Magento running on Nginx with PHP-FPM and APC. It was a struggle, but I did it. So I thought I would share my notes in hopes they will help others. I do not claim that these same commands will work for you in your unique environment, and I probably won’t be much help troubleshooting when they fail. Also, please add a comment and correct me if anything is wrong here, or if there is a better way to do anything. Thanks!
Here are some of the resources I used to figure this out the first time around:
And here is the start of the compile and install process (note: you might need to sudo these commands):
# aptitude install make bison flex gcc patch autoconf subversion locate libc6 libpcre3 libpcre3-dev libpcrecpp0 libssl0.9.8 zlib1g lsb-base
# mkdir ~/sources
# cd ~/sources/
# wget http://sysoev.ru/nginx/nginx-0.7.63.tar.gz
# tar -zxvf nginx-0.7.63.tar.gz
# cd nginx-0.7.63/
# ./configure --sbin-path=/usr/local/sbin --with-http_ssl_module
# make
# make install
# ln -s /usr/local/nginx/conf /etc/nginx
# /usr/local/sbin/nginx
# kill `cat /usr/local/nginx/logs/nginx.pid`
Then, I created the following init script at /etc/init.d/nginx:
#! /bin/sh
### BEGIN INIT INFO
# Provides: nginx
# Required-Start: $all
# Required-Stop: $all
# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6
# Short-Description: starts the nginx web server
# Description: starts nginx using start-stop-daemon
### END INIT INFO
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
DAEMON=/usr/local/sbin/nginx
NAME=nginx
DESC=nginx
test -x $DAEMON || exit 0
# Include nginx defaults if available
if [ -f /etc/default/nginx ] ; then
. /etc/default/nginx
fi
set -e
. /lib/lsb/init-functions
case "$1" in
start)
echo -n "Starting $DESC: "
start-stop-daemon --start --quiet --pidfile /usr/local/nginx/logs/$NAME.pid \
--exec $DAEMON -- $DAEMON_OPTS || true
echo "$NAME."
;;
stop)
echo -n "Stopping $DESC: "
start-stop-daemon --stop --quiet --pidfile /usr/local/nginx/logs/$NAME.pid \
--exec $DAEMON || true
echo "$NAME."
;;
restart|force-reload)
echo -n "Restarting $DESC: "
start-stop-daemon --stop --quiet --pidfile \
/usr/local/nginx/logs/$NAME.pid --exec $DAEMON || true
sleep 1
start-stop-daemon --start --quiet --pidfile \
/usr/local/nginx/logs/$NAME.pid --exec $DAEMON -- $DAEMON_OPTS || true
echo "$NAME."
;;
reload)
echo -n "Reloading $DESC configuration: "
start-stop-daemon --stop --signal HUP --quiet --pidfile /usr/local/nginx/logs/$NAME.pid \
--exec $DAEMON || true
echo "$NAME."
;;
status)
status_of_proc -p /usr/local/nginx/logs/$NAME.pid "$DAEMON" nginx && exit 0 || exit $?
;;
*)
N=/etc/init.d/$NAME
echo "Usage: $N {start|stop|restart|reload|force-reload|status}" >&2
exit 1
;;
esac
exit 0
Now we can easily start and stop Nginx with familiar commands like /etc/init.d/nginx start. To install the init script, do the following:
# chmod +x /etc/init.d/nginx
# /usr/sbin/update-rc.d -f nginx defaults
You should have Nginx compiled, installed, and running now. Yay! Next, Part 2 will be about compiling and installing PHP-FPM. Finally, Part 3 is about setting up Magento with Nginx and PHP-FPM (including mod_rewrite issues). I will not be covering the other aspects of setting up a Magento server like MySQL and sendmail, because these are the same as for an Apache stack, which has plenty of documentation out on the Internet already.
I have read parts of the official Magento User Guide (and php/architect’s Guide to E-Commerce Programming with Magento) but this book fills a different gap than the others. The User Guide covers all of the features, but it is not a practical guide. (The programming book was great, but did not cover how to actually use Magento). My favorite thing about the Beginner’s Guide is that it showed how to use Magento in a practical, day-to-day way. Being just a “beginner’s guide” it does not cover all of the features, but it covers a wide swath of the important ones. What really impressed me was the way it provided tips on how to deal with and overcome some of Magento’s serious (and occasionally deal-breaking) problems that make the day-to-day administration of an online store difficult.
An example of this is the confusing order “statuses”. Depending on how the order was paid for (PayPal, check, CC, etc.) orders may be marked “Pending” or “Processing” in a confusing way. The book puts in a real effort to make sense of this mess, and even offers a good work-around using the “Hold” feature. Order fulfillment and management is one of the biggest pain points in Magento, so I’m glad the book took a moment to deal with this, unlike the official User Guide. The book had other good practical tips for setting up Taxes, Shipping, Meta Information, and more.
The content of the book is organized well, covering the setup of a store and all its components in the best order for ease and speed. For instance, if you added all of your products before sorting out your Tax Classes you would create a lot of unnecessary work for yourself later, so the book smartly walks you through setting up Taxes first.
The topics covered in this book are (in order):
This is only a beginner’s guide, so many features of Magento are not covered. This is not necessarily a bad thing, but it’s good to know before purchasing. If you were hoping to learn how to create Coupon Codes, for instance, you would be disappointed. Some of the main topics not covered in this book are:
There are only two places where I think the book fell short of its goal as a “beginner’s guide”, and the first is the speed issue. In the book’s defense, fixing the speed problem is outside the abilities of a “beginner”, but it is a serious enough issue that it should have been addressed. The book had nothing to say about performance, except a brief mention of the Cache (in the context of turning it off to view changed code). I would like to have seen a few paragraphs about the potential for speed problems, notes on choosing a fast host… something along those lines.
The second complaint I had surprised me: there is no mention of how to customize the emails sent to the customer. The default Magento install sends terrible emails with things like “555-DEMO” phone numbers, “Magento” alt text, and other sloppy things hard-coded into the email templates that look very unprofessional. You can override these in the “Email Templates” section of the Admin Configuration or hand-edit the files, but the book mentions neither method. The book did explain how to configure the appropriate email addresses, explained when emails are sent to the customer, and even mentioned how to change the address on the printable invoices and packing slips. But the common problem of editing the email templates was missed. This is my biggest complaint about the book.
Now, back to some praise: this is the first book I’ve read from Packt, so I don’t know if this is the norm, but I like how they divided the chapters into sub-sections like the step-by-step “Time for Action” walkthroughs and the “What Just Happened” recaps. These served well to guide the “beginner” through the processes explained in the book, and also to explain the “why” behind the actions. Throughout the book there are also nice screenshots illustrating the “whats” and the “wheres” of the user interface, a very important feature of any software beginner’s guide. And at the end of the book the step-by-step instructions from all of the chapters are summarized (again with screenshots) as a handy quick-reference.
All in all it’s a good guide to install, configure, and start selling online with Magento. One can, of course, get familiar with using Magento without it. Through trial and error I eventually came to some of the same conclusions the book did about the best way to fulfill orders, name categories, etc. What this book does is provide a shortcut which will hopefully save some of the inevitable frustration of learning new software. It would be especially useful for the true beginner who has never set up an open source software system (like WordPress, Drupal, etc) on a server before, or who has never run an ecommerce store of any kind before.
Magento is a complex and constantly evolving piece of software, so no matter what you will find yourself on Google eventually learning about topics beyond the scope of any book, but as books go this Beginner’s Guide looks like a good place to start.
You can buy it directly from the publisher here, or from Amazon here.
This little code snippet is really basic, and not actually that good or clever. You can make a much nicer slideshow or carousel if you put a little time into it. But I just want to demonstrate how many of the nice JavaScript effects we are used to on the “Web 2.0” are also possible on a Facebook Page via Facebook’s Animate library. It’s not as robust as jQuery or Scriptaculous, but since you can’t import those libraries to a Tab the Animate library will have to do. :)
The final result looks like this:
And you can view a live demo on the CPD Facebook Page
For instructions on how to create your own custom Facebook tabs, read my tutorial here.
To begin, here is the simple FBJS <script> code:
var numslides = 7;
var slidesvisible = 3;
var currentslide = 0;
var slidewidth = 147;

function goright() {
  if (currentslide <= (numslides-slidesvisible-1)) {
    Animation(document.getElementById('slideshow_inner')).by('right', slidewidth+'px').by('left', '-'+slidewidth+'px').go();
    if (currentslide == (numslides-slidesvisible-1)) {
      Animation(document.getElementById('right_button')).to('opacity', '0.3').go();
      Animation(document.getElementById('left_button')).to('opacity', '1').go();
    }
    if (currentslide < (numslides-slidesvisible-1)) {
      Animation(document.getElementById('left_button')).to('opacity', '1').go();
    }
    currentslide++;
  }
}

function goleft() {
  if (currentslide > 0) {
    Animation(document.getElementById('slideshow_inner')).by('left', slidewidth+'px').by('right', '-'+slidewidth+'px').go();
    if (currentslide == 1) {
      Animation(document.getElementById('left_button')).to('opacity', '0.3').go();
      Animation(document.getElementById('right_button')).to('opacity', '1').go();
    }
    if (currentslide > 1) {
      Animation(document.getElementById('right_button')).to('opacity', '1').go();
    }
    currentslide--;
  }
}
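Before moving on to the CSS, note that the boundary checks in goright() and goleft() above can be reasoned about separately from the FBJS animation calls. Here they are distilled into plain functions with the same variable meanings (a sketch mirroring the code above, not part of the Facebook API):

```javascript
// numslides: total slides; slidesvisible: slides shown at once.
// Valid positions for the leftmost visible slide run 0..(numslides - slidesvisible).
function canGoRight(currentslide, numslides, slidesvisible) {
  return currentslide <= (numslides - slidesvisible - 1);
}
function canGoLeft(currentslide) {
  return currentslide > 0;
}

// With 7 slides and 3 visible, you can scroll right from positions 0 through 3
// and stop at position 4, where slides 5, 6 and 7 are showing.
var pos = 0;
while (canGoRight(pos, 7, 3)) pos++;
console.log(pos); // 4
```

This is also why the buttons dim exactly when currentslide reaches the last valid position (right) or position 1 about to become 0 (left).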
Here is the CSS:
#slideshow_wrapper { width:530px; clear: both; margin-bottom: 20px; }
#slideshow { overflow: hidden; width: 435px; float: left; position:relative; margin-right: 5px; }
#slideshow_inner { position: relative; width: 1250px; }
#slideshow_inner a { opacity:0.7; margin: 0 7px; }
#slideshow_inner a:hover { opacity: 1; }
And finally, here is the markup:
<div id="slideshow_wrapper">
<img id="left_button" src="http:/yoururl.com/images/left_button.gif" onclick="goleft(); return false;" style="float: left; display: block; width: 41px; cursor: pointer; opacity: 0.3;" />
<div id="slideshow">
<div id="slideshow_inner">
<a id="slide1" href="http://yoururl.com/link1" title="Oronaut Outdoor Blog"><img src="<?php echo $basepath ?>images/slide1.jpg" /></a>
<a id="slide2" href="http://yoururl.com/link2" title="Colorado Ski Mountaineering Cup"><img src="<?php echo $basepath ?>images/slide2.jpg" /></a>
<a id="slide3" href="http://yoururl.com/link3" title="PowderBlog"><img src="<?php echo $basepath ?>images/slide3.jpg" /></a>
<a id="slide4" href="yoururl.com/link4" title="Highland Meadows HOA"><img src="http:/yoururl.com/images/slide4.jpg" /></a>
<a id="slide5" href="http://yoururl.com/link5" title="United State Ski Mountaineering Association"><img src="http:/yoururl.com/images/slide5.jpg" /></a>
<a id="slide6" href="http://yoururl.com/link6" title="Circle S Seeds"><img src="http:/yoururl.com/images/slide6.jpg" /></a>
<a id="slide7" href="http://yoururl.com/link7" title="Rita Designs"><img src="http:/yoururl.com/images/slide7.jpg" /></a>
</div>
</div>
<img id="right_button" src="http:/yoururl.com/images/right_button2.gif" onclick="goright(); return false;" style="float: left; display: block; width: 41px; cursor: pointer;" />
<p>Click on an image to leave Facebook and visit the Portfolio.</p>
</div>
(One thing to note when scripting FBJS for Tabs: onLoad does not work. All JS must be started with a trigger event of some sort, even if it is as simple as a mouseOver.)
This is just a starting point. My code is not very generalized, the opacity effects have IE issues (as usual), and there are probably other issues besides. But have fun building on this, and create some sexy animated Facebook tabs!
UPDATE 3-10-2011: Easily create and manage custom Facebook Page Tabs with SplashLab Social – Now available!
The fastest and easiest way to add custom Tabs to a Page is with the Static FBML application. A nice visual tutorial of this method is posted on lorrainesiew.wordpress.com. I will also quickly outline the process here:
It’s hard to beat the Static FBML app in terms of speed and simplicity. The only problem is that it really is static. When I tried to add a Comments form with the <fb:comments> tag it threw errors, and other dynamic FBML tags have the same problem. This is fine most of the time, since you can do a lot with static images, links and videos. (Web 1.0!) But if you want a Comment area or imported Twitter or RSS feeds you’ll need a dynamic page, and a dynamic Tab requires its own special Facebook Application. Next I cover how to make a Facebook app that adds a Tab to your Page.
To make a “fancy”, dynamic Tab like I did for the Chili Pepper page on Facebook, you need to create a new Facebook Application. A Facebook App is, at its core, just a web page hosted on a server which is loaded into Facebook via AJAX. You can make very advanced apps that hook into the Facebook Feed and other things, but all I want to cover here is a very simple one that just displays HTML on a Tab. The basic steps for creating an App are:
There are many great tutorials that provide detailed instructions on how to do all this, so rather than reinvent the wheel I’ll provide some links to these resources:
All you really need is the official Get Started page, in my opinion. But even with all the tutorials in the world you’ll find yourself searching through the Forums to figure out errors and problems, since the Documentation is just so-so.
Using the resources above you can create a Facebook App. Here are the steps necessary to create a Tab with your App and add it to your Page:
You now have a custom App Tab on your Page! Creating a nice Tab with dynamic content is beyond the scope of this post, but I hope to give some short tutorials in later posts about that. To help you start out making your Tab though, here are two things to remember:
if( !isset($_POST['fb_sig_in_profile_tab']) ) { echo "tab only code here"; }
if ($_REQUEST['fb_sig_logged_out_facebook']) { echo "code for logged out users"; }
<fb:visible-to-connection>Welcome, fans!</fb:visible-to-connection>
And finally, for some inspiration for your own custom page check out the great things designers are doing with Facebook Pages over on the Facebook Showcase