December 05
Red Adair Rings True In IT

“If you think it's expensive to hire a professional to do the job, wait until you hire an amateur.” – Red Adair [1]

This quote resonated with me when I recently took on a do-it-yourself (DIY) renovation project at my home. I decided to do the project myself because it appeared fairly straightforward. I had a vision of the finished project in my mind. I had gone so far as to draw out the plan and itemize the materials and tools I would need to complete the project. I was confident that I knew what I wanted, knew how to build it, and had access to adequate resources to complete the project successfully. I began with a little demolition work and ended up accidentally over-demolishing the project. I had to hire a professional to clean up the mess I had made and complete the task. The end result is that I paid more than I would have if I had hired the professional to begin with.

A DIY project that could have had better results with the assistance of a professional.

When Red Adair first spoke this famous quote, he was referring to his services as a firefighter. I have found that the quote also applies to professional construction and renovation services. In my own professional experience, I have seen it suit Information Technology (IT) projects where organizations embark on the DIY path. The results are often incomplete projects, or projects that fall short of the goal. In the end, the organization calls on a professional consulting business to “fix” its project.

In addition to doing the job right the first time, there are other advantages to hiring a professional. Donald Rumsfeld, former United States Secretary of Defense, said “…there are things we do not know we don’t know.” [2] A professional consultant helps organizations identify aspects of their organization or project that they don’t know about. The organization can then leverage or rectify the unknown, depending on what it is.

A professional consultant typically has past experience and knowledge that can take an organization’s project beyond its initial vision. While this may appear to increase the scope, cost and timeline of your project, it may instead yield a creative solution that accomplishes the same or greater scope while potentially reducing costs and timelines.

In some projects, hiring an independent professional may simply add value by adding a fresh, new perspective.  We have seen projects where organizations are certain they know what they want and simply want to “build it”.  A professional consultant can often help customers step back and revisit the goals and strategies of a project, enabling customers to see ideas they had not thought of, resulting in superior results.

On many occasions, IT professional consultants are hired after failed DIY attempts, which results in unnecessarily high costs to the customer. As Mr. Adair said, while an amateur may look less expensive initially, in the long run it is often less expensive to hire a professional to do the job. It is important to understand the value of hiring professional consultants to strategize, design, develop and deploy a business system.


Red Adair was an American oil well firefighter born on June 18, 1918, in Houston, Texas. In 1938, he worked his first oil-related job with the Otis Pressure Control Company, and during World War II he served in the 139th Bomb Disposal Squadron. In 1959, he formed Red Adair Company, Inc., which provided services to control oil well fires and blowouts. His company established modern-day effective wild well control techniques. Adair’s accomplishments gained notice, and in 1968 he became the technical advisor on the film Hellfighters, starring John Wayne. During his career, he and his company completed over 1,000 jobs internationally, including in 1991's Operation Desert Storm. Adair died on August 7, 2004, in Houston, Texas.

References:

  1. Red Adair quote, BrainyQuote: http://www.brainyquote.com/quotes/quotes/r/redadair195665.html
  2. "DoD News Briefing – Secretary Rumsfeld and Gen. Myers", United States Department of Defense (defense.gov), February 12, 2002.
December 05
My History with Umbraco

Intro

Having worked with Umbraco since 2008, I’ve seen this product evolve into one of the most user-friendly and best-supported Content Management Systems (CMS) I’ve ever worked with. Umbraco is a free, open source CMS that anyone can download and use for their own needs. It was created by Niels Hartvig, a developer from Denmark. There is a strong developer community and forum where users can post issues and help each other out. The Umbraco community also contributes packages such as blogs, form generators, back office tools, and much more. Its versatility allows Umbraco developers to offer both quick out-of-the-box solutions and solutions customized for specific business processes.



How it started

Prior to getting involved with Umbraco, Redengine (my former employer) had its own CMS, called RCM, that it had spent years developing. Maintaining and upgrading RCM to keep up with other CMSs required a lot of time and resources. At the time we discovered Umbraco, it was quite small and not very well known. After doing some research, we found it was superior to RCM in every aspect: user friendliness, performance and speed, upkeep and maintenance, and it was also a free tool with source code available. One of the core Umbraco people (Paul Sterling) facilitated training for all the employees, and after just one training session we were comfortable rolling out websites in Umbraco. We always had a point of contact in case we ran into unique issues, but for almost every issue we encountered we could find someone on the community forum who had hit the same issue and received answers. After committing to Umbraco, the feedback we received from clients was overwhelmingly positive. Almost everyone raved about how easy it was to use, and it made my training sessions so much easier because of how intuitively it was designed and structured. Having the Umbraco team responsible for bug fixes and updates also removed that work and cost from us. Umbraco publishes its own documentation on installs, upgrades, and patch updates, all of which can be found at http://our.umbraco.org


Community and Codegarden

The Umbraco community is extremely active, and it is amazing to see how generous its members are with their time and how much they want to help people who either have issues or are just getting started. Last summer I was in Copenhagen, Denmark for their annual conference, “Codegarden”. During the keynote, they announced that they were scrapping Umbraco 5, declaring it a failed project due to the decision to keep its development within a small group instead of involving the community. Admitting their mistake, they took feedback from the attendees and collected information on what issues people wanted fixed and what they wanted to see in Umbraco 6. The biggest feature the developers insisted on was having Umbraco run in MVC. Using the feedback they received, a roadmap (http://our.umbraco.org/contribute/roadmap) was created containing features and target dates leading up to the release of Umbraco 6 and the minor versions after it. Umbraco 6 was a huge success. It included a new, improved data access layer, the ability to run a project in MVC rendering mode, and a lot of other improvements.
 


Codegarden 2012 keynote. If you look at the far left hand side, you’ll see me!

 

 

Where Codegarden takes place.

Umbraco 7

Today they are wrapping up Umbraco 6 and preparing to release Umbraco 7 (code-named “Belle”) in November 2013. This version includes a redesigned back office interface, and upgrading Umbraco 6 installs to 7 requires minimal effort. They have also been working with the Microsoft Azure team to develop a solution for automatic version upgrades and easier staging and production deployments.


A preview of what the Umbraco 7 back office interface looks like.

Umbraco Roadmap

Conclusion

As a developer, the thing I love most about Umbraco is that it doesn’t restrict my ability to do anything. It’s a tool for users to manage content, and it only adds to what I’m building. It lets developers build without restraint and be as creative as they wish, and it lets designers design and build sites freely using whatever tools or frameworks they are comfortable with. Being open source with a strong community behind it, Umbraco gets great, honest feedback, and the core team actually listens to its users. It has been amazing to have been involved with Umbraco while they were small, and to watch them evolve over the last few years into what they are now.

December 05
Prototyping with Twitter Bootstrap

I joined the iomer team midway through the process of a website redesign. The wireframes had already been created by other team members, and for my first project I was tasked with developing the site and its visual interface. As with many projects, there was a limited budget and a short timeframe, so developing a separate, throwaway prototype whose code could not be carried forward would have been too costly and inefficient. We wanted to incorporate a more agile design process, and because we knew we'd be doing user testing, we needed a solution that would allow us to test the prototype and make changes quickly. For these reasons, a front end framework was the right solution.

Why A Front End Framework?

Front end frameworks can be powerful tools in a designer's development process. A framework can save a ton of time in a project, and with most frameworks employing a fluid grid system, scaling to different devices becomes less of a headache. A framework also supports a more agile process, allowing a designer to design and develop a site in a streamlined approach. There are numerous frameworks available, and after researching a few, we chose Twitter Bootstrap, which had just released an updated version of the framework, Twitter Bootstrap 3, for its breadth of components and community support. Since this was my first time using a framework, I wanted to ensure that it was well documented and would support all my needs.

A New Process with Bootstrap

Developing In Browser Makes Life Easier


Gone are the days of pixel perfection in Photoshop with static mock-ups. By utilizing a framework, I was able to begin developing a prototype of the site in the browser straight from the wireframes, without getting caught up in the visual details. We wanted a prototype built in the browser so we could engage end users in usability tests. While this can be done with various prototyping tools, developing in the browser ensures cross-browser compliance and allows testing on multiple platforms and devices. Using Bootstrap helped assure browser support, especially for IE8+. By getting the site up and running in a matter of days, we could quickly jump into user testing. We travelled to the client's business and set up a usability test station for any patrons who came through. The changes identified by this testing were implemented quickly, with a new feature added in a matter of hours. Since the prototype was developed in HTML and hosted on a local server, all the changes could be easily viewed by the client.

Responsive Design Feels Like A Breeze

Unless the project scope calls for a separate mobile site, a responsive layout is paramount. However, building a responsive site adds time and cost to a project, easily doubling the amount of design and development required.  The update of Twitter Bootstrap 3 moved the framework to a mobile-first, fluid grid layout. This cuts down on the development side of creating a responsive layout and allows more focus to be put towards content strategy and information architecture on mobile platforms.
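As a sketch of how the mobile-first grid works (the class names are standard Bootstrap 3; the two-column split is a made-up example, not the project's actual layout):

```html
<!-- Bootstrap 3 grid: columns stack full-width on phones by default,
     and only sit side by side at the "md" (medium) breakpoint and up -->
<div class="container">
  <div class="row">
    <div class="col-md-8">Main content</div>
    <div class="col-md-4">Secondary content</div>
  </div>
</div>
```

Because the default is the stacked mobile layout, the desktop view becomes the special case you opt into, rather than the other way around.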

The main function of the site focused on a complex form field to request an appointment. Form design can become a complicated mess if not positioned properly at small screen sizes. With the responsive layout provided by Bootstrap, the form floated into an efficient and usable layout for mobile devices. The mobile view of the prototype only required a minimal amount of tweaks to make it just right. By utilizing the responsive utilities, it's easy to change navigation styles for mobile views without needing media queries or JavaScript.
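For instance, Bootstrap 3's responsive utility classes can swap between two navigation treatments at the phone breakpoint; the markup below is a hypothetical sketch, not the site's actual navigation:

```html
<!-- hidden-xs hides the pill nav on phones; visible-xs shows the
     compact select menu only on phones - no custom media queries -->
<ul class="nav nav-pills hidden-xs">
  <li><a href="#request">Request an Appointment</a></li>
  <li><a href="#contact">Contact</a></li>
</ul>
<select class="form-control visible-xs">
  <option value="#request">Request an Appointment</option>
  <option value="#contact">Contact</option>
</select>
```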

 

Twitter Bootstrap Goodies

A great aspect of Twitter Bootstrap 3 is the number of built-in components a designer can use. The form I had to design was lengthy, with lots of content. To break up the amount of content a user engages with at one time, we used collapsible elements to section off form content. All the components supplied are well documented and easy to implement. If Bootstrap doesn't provide a certain component, there is a vast array of third party plugins designed specifically for Bootstrap. The form also required a date picker, and while Bootstrap doesn't provide one natively, other developers have already created one and made it available for others to use. Because these components are designed specifically for Bootstrap, they work seamlessly with little to no adjustment necessary.
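A minimal sketch of sectioning form content with Bootstrap 3's collapse plugin (requires jQuery and bootstrap.js; the IDs and labels are invented for illustration):

```html
<!-- The button toggles the panel whose id matches data-target;
     the panel starts hidden because of the "collapse" class -->
<button type="button" class="btn btn-default"
        data-toggle="collapse" data-target="#contact-details">
  Contact details
</button>
<div id="contact-details" class="collapse">
  <div class="form-group">
    <label for="email">Email</label>
    <input type="email" class="form-control" id="email">
  </div>
</div>
```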

Bringing It All Together

The visual design of the site was developed concurrently with the workable prototype. Rather than sending the client a detailed mock-up of what the site would look like (pre-user testing), mood boards were created to present the intended look and feel of the site. The mood board strategy fits well with the streamlined, agile approach of designing in the browser with Bootstrap, and it prevented the client from picking apart small details of the visual design without looking at the overall picture. Three mood boards were developed for the client, outlining the suggested colours, fonts, and image quality, and showcasing other sample sites. Similar to a style tile, a mood board is a bit broader and less concerned with site element details like button styles. After the client selected the mood board they felt was appropriate for their vision, we had a clear direction for what the visual design should look like.

Mood Board

When our testing was done and the structural changes were made, applying the visual style based on the mood board felt effortless. Since the site had already been developed in the browser as a grayscale prototype, applying the visual styling only required a few CSS modifications.

Final Design

Some Downsides to Bootstrap

  1. Class based modifiers: Using classes as modifiers can really hurt in the end. Our original wireframe had the secondary navigation floating to the right. After our usability testing, it was determined that it was more effective on the left. Rather than changing one line in the CSS file, I had to go through every site page and change the class name of "pull-right" to "pull-left".
  2. Bloated CSS file: Unless you customize the CSS from the original download (as you should), your CSS will be HUGE. About 5,000 lines.
  3. Learning to be a better developer: With all the code needed as a simple copy and paste, acquiring the skills to develop cleaner and more efficient code won't come easily. If this is what you strive for, then staying away from a framework like Bootstrap would be best.
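To illustrate the first downside: because pull-right is a presentational class baked into the markup, moving the navigation meant editing every page, whereas a semantic class would have confined the change to a single CSS rule (the secondary-nav class name below is hypothetical):

```html
<!-- Bootstrap utility class: the float lives in the markup of every page -->
<div class="sidebar pull-right">...</div>

<!-- Semantic alternative: the float lives in one stylesheet rule -->
<div class="sidebar secondary-nav">...</div>
<style>
  /* changed from float: right after usability testing, in one place */
  .secondary-nav { float: left; }
</style>
```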

Even though I came into this project midway, using Twitter Bootstrap allowed me to jump right in to the process. Developing the testable prototype with Bootstrap and creating the visual design concurrently, yet separately, created a more agile process and great learning experience. In the end, we were able to deliver a solution to the client that looked great and was developed in an efficient manner.

While Twitter Bootstrap is a popular front end framework, there are many others available. Each framework provides its own class naming system and its own set of native components. Regardless of which framework you choose, it can be used to quickly and efficiently prototype and develop your solution.

December 05
Gamification: Coming to a workplace near you!

Video games aren't just a wave of the future; they're the current reality for millions of players who take to their consoles and PCs every day.

Games are undoubtedly one of the more addictive elements of modern life, and their use in the workplace is growing. Research suggests that the gamification market will be worth approximately $3 billion in the next few years.

Gamified competition in the enterprise workplace

Competitive gamification is certainly becoming a hot new business theme in modern corporate development these days. But that isn't necessarily a bad thing. It has been demonstrably effective in sales, where gaming mechanics are used to promote a "competitive interest" in engaging customers and closing deals. Now, management is exploring other business functions which might benefit from gamification techniques.


The allure is easy to understand.  Companies around the world are suffering from falling employee engagement, while at the same time having to come to terms with the financial restraints enforced by the global recession.  Anything that both makes work fun and provides non-monetary incentives is an enticing proposition, and the potential for increased productivity only makes it more attractive.

There are numerous examples of gamification in the workplace that are achieving real results.  At Target for instance, they have made the checkout process more like a game.  Each time a cashier checks someone out, they're playing a game - a red light tells them they're too slow in scanning an item, green says they're bang-on.  A real time score is then provided to reflect their performance.

A well designed game makes sure everyone is having fun, even in competition

Well designed Gamification

In the corporate environment, motivation can be viewed as the process of engaging employees and encouraging them toward progress and achievement, to foster cooperation and collaboration and, by doing so, to improve themselves and the company they work for.

Though game mechanics can be used to motivate employees and promote the behaviors that the company wants to see, each initiative should be well thought-out and designed with the most effective elements. Arbitrary use of game elements modeled on competition may be useful for short term sales initiatives, but may be disruptive and anti-productive in the long term.

Instead of taking the zero-sum approach to try to motivate top performers, we should consider strategies which bring individual strengths together to produce a more effective corporate team. That formula will usually outperform the individualistic paradigm. It will help preserve and improve a positive corporate culture, support and encourage the development of talent and skills, and increase competitive strength where it really matters: the marketplace.

The key characteristics of adaptive competition have much in common with John R. Wooden's oft-quoted definition of success:

"Success is peace of mind which is a direct result of self-satisfaction in knowing you made the effort to become the best of which you are capable".

December 05
Knowledge of Possibilities

Recently, I read a great blog post by Daniel Burrus which states, "Give your customers the ability to do what they can't currently do but would want to if they only knew it was possible." The key here is knowledge of possibilities, knowledge which comes through experience and education. It's an organization's responsibility to educate its customers by showing the value of its products and services, so that the possibilities are known. For this to happen, it's important to know what will bring value to customers: to have a good understanding of their needs.

Take, for instance, the OpenData initiatives going on internationally (UK, US, NZ) and nationally (Canada, Alberta, Edmonton). Government departments throughout the world have opened the doors for innovation by moving forward with the Open Government initiative, exposing government information on data portals. Governments are aiming for the IT community to use this data to engage citizens. Thousands of datasets (collections of data in tabular format) have been released for citizens and the IT community to use.

David Eaves (an Open Government thought leader) recently wrote an article about the relaunch of the Government of Canada's OpenData portal (Data.gc.ca). He refers to the additional datasets in the following words:

"… a lot more data is likely going to get into that portal over the next 2-5 years. And a tsunami of data could end up in it over the next 10-25 years. Indeed, so much data, that I suspect a portal will no longer be a logical way to share it all."

David Eaves further discusses this problem in terms of the procurement process government has to deal with:

"There is potentially a tremendous amount at stake in how government handles the procurement side of all this, because whether they realized it or not, it may have just completely shaken up the IT industry that serves it."


In other words, the IT industry, which makes up the majority of the innovators who would be using OpenData, will be dealing with a tsunami of data. It will also face questions such as: Where should it invest? Which datasets should it consider? How much value can it get for its efforts?

Government is trying to attract two sets of users: the IT community and citizens. Government is facilitating the IT community so it can attract citizens, but how would the IT community know what's important to citizens? Without that knowledge, it's hard to build systems that produce value for citizens. Is there a role for government in facilitating that communication? Government is looking into some high-value data, but it's possible that government is making assumptions about what citizens need. This could result in a vast amount of wasted effort. What can the IT community do, with the help of government, to gain a good understanding of citizens' needs? How can government let the IT community know of the possibilities of OpenData?

In the past, I've used a user engagement model to help clients understand how design can help sell the value of information.  I've extended that model (Fig. 1.0) to encompass OpenData portals in order to help think of a possible solution. The underlying message is that the IT community does need to invest in understanding citizens' interest since citizens are their customers. However, government plays a vital role in facilitating that communication.

 

Fig. 1.0: The user engagement model, extended to OpenData portals.

 

As shown in the example above, Mark plays the role of the IT community, drawing on the value built by citizens such as Bob. Mark realizes that he can now invest in expanding his understanding of citizens' needs based on the interest that has already been built. He can now leverage datasets and develop an app that others can use and benefit from. Given that the model above has seven levels (0-6), it's the topmost level (the contributive engagement level) where a developer or entrepreneur can best meet citizens' needs and add additional value. This is where government comes in: to help achieve that level of engagement, government must establish communication, helping citizens progress through the first six levels.

We see this kind of engagement built into the current City of Edmonton portal, where data tells a story to grab citizens' interest. For example, maps of street construction projects help notify citizens of traffic jams to avoid on their commute. Based on that interest, the value of the data can be leveraged by the IT community to provide additional value.

For both the IT community and government to partner up and engage citizens, it's important to follow five steps (which will be expanded further in later blog posts):

  • Get a good understanding of what citizens really need
  • Get citizens' attention by telling stories through data
  • Sell the value of information
  • Let users share the value
  • Expand and add to that value

Government has faith that the IT community will leverage its knowledge of technology to use OpenData to engage citizens. However, government needs to facilitate communication between the IT community and citizens and let them know of the possibilities, possibilities that are unlimited in the case of Open Government.

December 05
Integrating AJAX and Telerik with SharePoint

To extend your SharePoint site to include AJAX, you will need to perform a few steps:

  • Download and install ASP.NET AJAX on your server farm
  • Extend the web.config file with some settings to enable AJAX
  • Add the AJAX script manager to your master page to enable Extenders or UpdatePanels

Install AJAX on the servers in your farm

Go to ajax.asp.net and install the AJAX extensions on each server in your farm.

Extend the web config files for AJAX

Extending SharePoint web.config files to include AJAX requires that you interleave some AJAX registration entries with the existing WSS registration entries.

Add a <sectionGroup> element to the <configSections> tag

<sectionGroup name="system.web.extensions" type="System.Web.Configuration.SystemWebExtensionsSectionGroup, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35">
<sectionGroup name="scripting" type="System.Web.Configuration.ScriptingSectionGroup, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35">
<section name="scriptResourceHandler" type="System.Web.Configuration.ScriptingScriptResourceHandlerSection, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" requirePermission="false" allowDefinition="MachineToApplication" />
<sectionGroup name="webServices" type="System.Web.Configuration.ScriptingWebServicesSectionGroup, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35">
<section name="authenticationService" type="System.Web.Configuration.ScriptingAuthenticationServiceSection, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" requirePermission="false" allowDefinition="MachineToApplication" />
<section name="jsonSerialization" type="System.Web.Configuration.ScriptingJsonSerializationSection, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" requirePermission="false" allowDefinition="Everywhere" />
<section name="profileService" type="System.Web.Configuration.ScriptingProfileServiceSection, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" requirePermission="false" allowDefinition="MachineToApplication" />
<section name="roleService" type="System.Web.Configuration.ScriptingRoleServiceSection, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" requirePermission="false" allowDefinition="MachineToApplication" />
</sectionGroup>
</sectionGroup>
</sectionGroup>

 

Add some new registrations to the end of the <httpHandlers> section

<add verb="*" path="*.asmx" validate="false" type="System.Web.Script.Services.ScriptHandlerFactory, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" />
<add verb="*" path="*_AppService.axd" validate="false" type="System.Web.Script.Services.ScriptHandlerFactory, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" />
<add verb="GET,HEAD" path="ScriptResource.axd" type="System.Web.Handlers.ScriptResourceHandler, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" validate="false" />

Create a WebPart using STSDEV

STSDEV is a utility that creates a web part project for Visual Studio. It automatically sets up the build and deployment macros, and the folders and file paths required to build a SharePoint package suitable for installation on a server farm.

Install a script manager on the page

Many components of AJAX require a script manager to be available on the page. The script manager can be added to the master page, or it can be added dynamically from within the web part. When adding a script manager dynamically, it must be added early enough to be of value to your control, but not added at all if there is already one on the page. This can be done from the OnInit method.

First, we need to declare some variables:

private DataTable DataSource1, DataSource2;
RadGrid RadGrid1 = new RadGrid();
public SummaryCosts sum;
private string html;
protected Telerik.Web.UI.RadAjaxLoadingPanel ralp;
protected Telerik.Web.UI.RadAjaxManager ram;
protected Panel msg;

private ScriptManager _AjaxManager;
[Microsoft.SharePoint.WebPartPages.WebPartStorage(Storage.None)]
public ScriptManager AjaxManager
{
get { return _AjaxManager; }
set { _AjaxManager = value; }
}

protected override void OnInit(EventArgs e)
{
base.OnInit(e);
ScriptManager scriptManager = ScriptManager.GetCurrent(this.Page);
if (scriptManager == null)
{
scriptManager = new ScriptManager();
scriptManager.ID = "ScriptManager1";
this.Page.Form.Controls.AddAt(0, scriptManager);
}
AjaxManager = scriptManager; // keep a reference to the page's ScriptManager

ram = RadAjaxManager.GetCurrent(Page); // assign the field declared above rather than shadowing it with a local
if (ram == null)
{
ram = new RadAjaxManager();
ram.ID = "RadAjaxManager1";
Page.Form.Controls.Add(ram);
Page.Items.Add(ram.GetType(), ram);
}

EnsureChildControls();
}

 

Add UpdatePanels to SharePoint

UpdatePanels are a useful part of AJAX and the simplest way to convert an existing ASP.NET control to take advantage of AJAX techniques. The following code is added to reset the WSS JavaScript "form onSubmit wrapper" behaviour.

private void EnsureUpdatePanelFixups()
{
if (this.Page.Form != null)
{
string formOnSubmitAtt = this.Page.Form.Attributes["onsubmit"];
if (formOnSubmitAtt == "return _spFormOnSubmitWrapper();")
{
this.Page.Form.Attributes["onsubmit"] = "_spFormOnSubmitWrapper();";
}
}

ScriptManager.RegisterStartupScript(this.Page, typeof(Page), "UpdatePanelFixup", "_spOriginalFormAction = document.forms[0].action; _spSuppressFormOnSubmitWrapper=true;", true);
}

 

Create the RadGrid control

Most of the work is done in the CreateChildControls method (naturally). This is where we open the database, create the DataTables of information to display within the RadGrid, and create the RadGrid and any other panels to be displayed. We also create a RadAjaxLoadingPanel and link the two tables together inside the RadGrid control.

protected override void CreateChildControls()
{
// Open the database and fetch the two DataTables to display
sum.opendatabase();
DataSource1 = sum.getCustomers();
DataSource2 = sum.getOrders();

// Simple title panel displayed alongside the grid
msg = new Panel();
msg.EnableViewState = false;
msg.Controls.Add(new LiteralControl("Customer Orders"));

// Master grid bound to the customers table
RadGrid1 = new RadGrid();
RadGrid1.DataSource = DataSource1;
RadGrid1.MasterTableView.DataKeyNames = new string[] { "CustomerID" };

RadGrid1.Width = Unit.Percentage(98);
RadGrid1.PageSize = 50;
RadGrid1.AllowPaging = true;
RadGrid1.PagerStyle.Mode = GridPagerMode.NextPrevAndNumeric;
RadGrid1.AutoGenerateColumns = false;
RadGrid1.Skin = "Web20";

RadGrid1.MasterTableView.PageSize = 50;
RadGrid1.MasterTableView.Width = Unit.Percentage(100);
GridBoundColumn boundColumn;

// The master table shows only the customer name
boundColumn = new GridBoundColumn();
RadGrid1.MasterTableView.Columns.Add(boundColumn);
boundColumn.DataField = "CustomerName";
boundColumn.HeaderText = "Customer Name";

// Detail table bound to the orders table, related on CustomerID
GridTableView tableViewOrders = new GridTableView(RadGrid1);
RadGrid1.MasterTableView.DetailTables.Add(tableViewOrders);
tableViewOrders.DataSource = DataSource2;
tableViewOrders.Width = Unit.Percentage(100);
GridRelationFields relationFields = new GridRelationFields();
tableViewOrders.ParentTableRelation.Add(relationFields);
relationFields.MasterKeyField = "CustomerID";
relationFields.DetailKeyField = "CustomerID";

// The detail table shows the order number and order date
boundColumn = new GridBoundColumn();
tableViewOrders.Columns.Add(boundColumn);
boundColumn.DataField = "OrderNumber";
boundColumn.HeaderText = "Order Number";
boundColumn = new GridBoundColumn();
tableViewOrders.Columns.Add(boundColumn);
boundColumn.DataField = "OrderDate";
boundColumn.HeaderText = "Order Date";

// Loading panel shown while AJAX requests are in progress
ralp = new RadAjaxLoadingPanel();
ralp.ID = "RadAjaxLoadingPanel";
this.Controls.Add(ralp);

this.Controls.Add(RadGrid1);
this.Controls.Add(msg);
}

 

In this code we create a RadGrid control with a master table/detail table view. The master table has the Customer Name as its bound column (what we are going to show). Then we create a new view (called tableViewOrders) that has the orders table associated with it. The MasterKeyField and DetailKeyField are both declared to be the CustomerID. In the orders table we bind the Order Number and the Order Date to the control. The last thing we do is create a loading panel for the control.

 

PreRender and Render processing

Just before the control renders, we need to associate the RadAjaxManager with the loading panel and the RadGrid that we have created.


protected override void OnPreRender(EventArgs e)
{
base.OnPreRender(e);
RadAjaxManager manager = RadAjaxManager.GetCurrent(Page);
if (manager != null)
{
manager.DefaultLoadingPanelID = ralp.ID;//assign default loading panel

//add ajax setting
manager.AjaxSettings.AddAjaxSetting(RadGrid1, RadGrid1);
manager.AjaxSettings.AddAjaxSetting(RadGrid1, msg);
}
}

 

Finally, the last thing to do is render out the control and anything else we may want to display.

 

protected override void Render(HtmlTextWriter writer)
{
EnsureUpdatePanelFixups();
base.Render(writer);
}

The result is a control that will dynamically display the customer name and their orders within a simple display without postbacks.

TelerikControl
December 05
Website Sustainability

Website sustainability may sound like a complex topic, but the concepts are quite straightforward and are fundamental to successful website development and growth. With any web property, sustainability issues can be identified and, with proper planning and implementation, avoided. The result is a website that will flourish and grow alongside your business.

Key Issues

During the initial design (or redesign) and creation of a website, time and effort are invested in ensuring that content is well organized, navigation is straightforward and, frankly, that the website is the best that it can be. As time passes, the site is at risk of becoming outdated, unwieldy and out of line with business goals and objectives.

The following scenarios are common in websites that have not undergone sustainability planning.

  • Organizational growth or change has not been reflected in the website, resulting in structure and content that is not relevant or does not reflect the goals and objectives of the business.
  • Stale or invalid content is not monitored and removed, resulting in a reduced level of trust in the website as a source of accurate information.
  • Content management and governance principles are not properly utilized, resulting in more content than the operational team can reasonably manage.
  • Insufficient testing with evolving browser technologies results in a site which is outmoded or no longer functioning.

The goal of a sustainability plan is to outline steps of action to take to eliminate the occurrence of the above issues and create a long-term foundation for success with the website.

Ongoing Activities

To support sustainability, the following activities should be scheduled and performed on a regular basis.

Content Review

Content should be regularly reviewed, and stale or irrelevant content archived or removed. This activity should include testing external links to ensure they are still valid.

Information Architecture Review

Content organization and navigation may need to evolve with changes in business goals and objectives, new features or concepts, or organizational updates. The IA review process may include activities such as card sorting or tree testing.

User Feedback Collection

Engaging with users should be an ongoing activity. By engaging users to provide feedback, ideas are generated with minimal effort from the operational team, user frustrations are alleviated and satisfaction with the company and the brand is increased. Most importantly, people like to be heard and feel like their opinions are being considered.

There are many possible methods for gathering feedback, and the process presents an opportunity to get creative.

  • Use a video to solicit comments and suggestions. This adds an element of interactivity and a personal face to the website.
  • Post an announcement or news item to advertise the feedback-gathering method. This is less intrusive than a pop-up message, which many people will appreciate.

The feedback gathering process may also include activities that involve more direct interaction with users, such as interviews or focus groups.

Usability Testing

Possible usability testing activities may include:

  • Time-on-Task Performance
  • Task Success
  • Contextual Inquiry
  • Heuristic Evaluation

Browser Update Testing

Browser technology is constantly changing and evolving and the website should be regularly tested to ensure ongoing compatibility. The site should be tested in old and new iterations of commonly used browsers. There are free website services such as Browser Shots (http://browsershots.org/) that allow you to test your site in many browsers at once.

Site Usage Statistics and Search Logs Analysis

Site usage statistics and search logs give you a direct look at how people are using the site, which content is most commonly accessed, and how people are searching for information. The results of the analysis can identify areas of improvement in terms of featuring or highlighting content, organization and navigation, and can assist with search configuration/changes to keywords and best bets.

December 05
3 Tools for Clearer, More Effective Writing

 
Need to convey a wealth of information in a tiny space? When working in the technical world, writers are constantly torn between the need to accurately describe complex ideas and the desire to make them clear and accessible. Too often one of these principles is sacrificed to satisfy the other. To avoid falling into this trap, you need the right knowledge.

Arm yourself with these 3 tools and you’re sure to make your point quickly, clearly and memorably.

  • Vocab Grabber – Have a nagging suspicion that you’ve used a word too often in your document? Sure you have! I used the word “writing” twenty times in the original draft of this post alone. A new tool from the people behind the Visual Thesaurus (an excellent tool in its own right), Vocab Grabber allows you to analyze the vocabulary used in large chunks of text. It supplies a concise breakdown of the type, number and frequency of words used in a text block, all graphed and mapped in quick, easy-to-read formats. Definitions, examples and synonyms are all a click away in this handy site, enabling you to rid your writing of the dullness of redundancy in seconds.

  • The OWL at Purdue – The least flashy of the three resources I’ll mention here, the OWL (Online Writing Lab) at Purdue is nonetheless a consistently valuable resource for producing polished, well-formed writing. It covers a wide breadth of topics, and I find myself constantly drawn back to reference its tips, tricks and starting points, for everything from technical composition to email etiquette. These articles can be a great help in communicating clear and concise messages. Additionally, the site offers an in-depth grammatical reference (try searching for “commas” or “apostrophes” in Google).

  • Poynter Online: Fifty Writing Tools – A list of 50 writing tips (in podcast form) from Roy Clark of the Poynter Institute for Media Studies. These tips, excerpted from his book "Writing Tools: 50 Essential Strategies for Every Writer", are a collection of insightful techniques that can enrich and inform your writing. Clark’s suggestions, wrought from the journalistic world, can be surprisingly effective when transplanted into a business or technical forum. These approaches can aid in adding weight to your opinions and are particularly effective when applied to business emails or executive summaries. This is a site I revisit any time I have to write succinctly about thoroughly complex topics and need to keep the reader’s attention focused.

There are tons of resource materials online for aspiring writers today and a bit of digging can unearth new and informative sources to improve your knowledge. Mastering these tools will tighten and tune your compositional skills, allowing you to both captivate your audience and elucidate your topic. When writing documents for clients, I’ve found that these are powerful resources for imbuing my work with a clarity and accessibility that makes it stand out.
December 05
SharePoint 2010 Data Access with LINQ to SharePoint

In SharePoint 2007, a popular and (relatively) straightforward method of querying SharePoint Foundation list data was the SPQuery class, which involved writing the query using the Collaborative Application Markup Language (CAML). Unfortunately, writing CAML XML is typically error-prone, and most developers rely on tools to auto-generate the query, such as the U2U CAML Query Builder. The result is that large portions of these CAML XML strings are embedded in the solution source code. This is problematic if the structure of the lists being queried changes, as these CAML queries will break without any notification: compiling the SharePoint solution doesn’t check the validity of the embedded XML string against the columns or field types of the target SharePoint list(s).

SharePoint 2010 introduces the LINQ to SharePoint provider to allow developers to write queries against SharePoint lists using LINQ syntax. The immediate advantage is that developers are now writing their queries against strongly-typed entity classes; Visual Studio’s integrated IntelliSense and the compiler’s field-existence and type checking significantly reduce the time taken to debug query errors introduced when list structure changes.

Most developers are familiar with LINQ syntax, and the official MSDN article set Managing Data with LINQ to SharePoint (http://msdn.microsoft.com/en-us/library/ee537339.aspx) is a good introduction to the basics. The purpose of this article is to discuss some lessons learned surrounding generating DataContext classes using SPMetal, and how to effectively use the resulting provider and entity classes within a custom SharePoint application.

Generating a DataContext for a Subset of Lists

Using SPMetal to generate a DataContext class will, by default, generate entity classes for all lists located on the target site. When the architecture of your SharePoint portal involves multiple sub-sites with different custom lists residing within each sub-site, this can cause duplication when generating your DataContexts. For instance, generating a DataContext for SubSiteA might result in the entity classes Announcement, Page, Task and Portfolio, while the subsequent generation of a DataContext for SubSiteB would include Announcement, Page and Vehicle. Note the duplication of the entities Announcement and Page between the two DataContext classes, when ultimately the point is to have two DataContext classes that contain only the entities Portfolio and Vehicle, respectively. This can be accomplished by passing an XML configuration file to the SPMetal command that specifies the lists to be generated in the resulting DataContext.

For example, to generate a DataContext for the Portfolio entity from SubSiteA:
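The original command line did not survive extraction; an invocation would look something like the following (the site URL, output file name and namespace are illustrative placeholders for your own environment):

```shell
REM Illustrative SPMetal invocation; run from the SharePoint 2010 "14-hive" BIN directory.
SPMetal.exe /web:http://server/SubSiteA /parameters:Portfolio.xml /code:PortfolioDataContext.cs /namespace:MyCompany.Portal.Data
```

The /parameters option is what points SPMetal at the XML configuration described below.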

Where the Portfolio.xml is the following:
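The listing itself did not survive extraction; a minimal parameters file for this scenario might look like the following (the xmlns value is the standard SPMetal parameters schema namespace):

```xml
<?xml version="1.0" encoding="utf-8"?>
<!-- Include only the Portfolio list and skip every other list on the web -->
<Web AccessModifier="Public"
     xmlns="http://schemas.microsoft.com/SharePoint/2009/spmetal">
  <List Name="Portfolio" />
  <ExcludeOtherLists />
</Web>
```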

The configuration XML should be self-explanatory: it contains a List element specifying that the list titled Portfolio should be included in the output, and an ExcludeOtherLists element indicating that all other lists located in the web should not be processed. Running this command yields the desired result of a PortfolioDataContext containing the single entity class Portfolio.

Using configurations to separate list entity classes into different DataContexts results in reduced duplication of classes as entities relating to common out-of-the-box SharePoint lists (Announcements or Tasks) won’t appear in each generated DataContext and can be separated into their own specific DataContexts that can be re-used across sites. For instance, the project’s code-base could include a single PageDataContext class that can be used for querying Page list items across every sub-site. Furthermore, developers can remain productive by generating separate DataContext classes targeting lists on the sub-site their code interacts with.

Getting Human-Readable Choice Field Values

If a list contains one or more columns that are of the type Choice (or Multi-Choice) then SPMetal will generate strongly-typed enumerations representing the valid set of values that can be assigned to the associated entity.

Consider the following enumeration generated from a PortfolioType Choice column with a valid set of values Modern Art, Depressing Poetry and Literature:
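The generated listing did not survive extraction; a sketch of what SPMetal emits, following the conventions described below (None and Invalid members, power-of-2 values, and ChoiceAttribute decorations), would be roughly:

```csharp
// Reconstructed sketch of an SPMetal-generated Choice enumeration.
// The exact attribute layout may differ slightly from real SPMetal output.
[System.Flags]
public enum PortfolioType : int
{
    None = 0,
    Invalid = 1,
    [Microsoft.SharePoint.Linq.ChoiceAttribute(Value = "Modern Art")]
    ModernArt = 2,
    [Microsoft.SharePoint.Linq.ChoiceAttribute(Value = "Depressing Poetry")]
    DepressingPoetry = 4,
    [Microsoft.SharePoint.Linq.ChoiceAttribute(Value = "Literature")]
    Literature = 8,
}
```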

Each of these generated enumerations always includes the values None and Invalid, followed by the remaining set of valid values that can be assigned to the Choice field. The integer values assigned to the enumeration members are powers of 2, which enables developers to use bit-wise operations to represent Multi-Choice values and perform comparisons. (Note: you will encounter issues if your choice field contains more than thirty options, as overflow will occur and SPMetal will ‘wrap around’ the generated values.) Furthermore, enumeration values are decorated with a ChoiceAttribute whose value holds the human-readable display text, since spaces and special characters are stripped from the enumeration value name.

When developing custom views against a list, a common requirement is binding a UI element, such as a dropdown, radio-button or checkbox list, to the valid set of options for a Choice field. Since SPMetal generates Choice field values in the format shown above, we can create utility methods for retrieving the human-readable display text, which can then be used as a data source for data-bound controls. The following utility methods use reflection to inspect enumeration values and extract their corresponding display text. Note that the examples reference the PortfolioType enumeration from above. You’ll need to include the following import declarations to use these methods:
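The import declarations did not survive extraction; they would likely be along these lines (System.Reflection for field inspection, Microsoft.SharePoint.Linq for ChoiceAttribute):

```csharp
using System;
using System.Collections.Generic;
using System.Reflection;
using Microsoft.SharePoint.Linq; // defines ChoiceAttribute
```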

GetAllChoiceAttributeValues

Example:

Results:
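The original listing for this method did not survive extraction. A sketch of how it could be implemented, assuming (per the discussion above) that ChoiceAttribute exposes the display text through its Value property:

```csharp
// Sketch: return the display text of every choice defined on the enumeration
// by walking its public static fields and reading each ChoiceAttribute.
public static List<string> GetAllChoiceAttributeValues<T>() where T : struct
{
    var values = new List<string>();
    foreach (FieldInfo field in typeof(T).GetFields(BindingFlags.Public | BindingFlags.Static))
    {
        var choice = (ChoiceAttribute)Attribute
            .GetCustomAttribute(field, typeof(ChoiceAttribute));
        if (choice != null)
            values.Add(choice.Value); // e.g. "Modern Art", not "ModernArt"
    }
    return values;
}
```

Called as GetAllChoiceAttributeValues&lt;PortfolioType&gt;(), this would yield the display texts of all valid choices while naturally skipping None and Invalid, which carry no ChoiceAttribute.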

GetChoiceAttributeValues(IEnumerable values)

Example:

Results:
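The original listing is likewise missing; a sketch that maps a subset of enumeration values (for instance, the individual flags of a Multi-Choice selection) to their display text:

```csharp
// Sketch: map a set of enumeration values to their human-readable display text,
// falling back to the member name when no ChoiceAttribute is present.
public static List<string> GetChoiceAttributeValues<T>(IEnumerable<T> values) where T : struct
{
    var result = new List<string>();
    foreach (T value in values)
    {
        FieldInfo field = typeof(T).GetField(value.ToString());
        ChoiceAttribute choice = field == null ? null : (ChoiceAttribute)Attribute
            .GetCustomAttribute(field, typeof(ChoiceAttribute));
        result.Add(choice != null ? choice.Value : value.ToString());
    }
    return result;
}
```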

GetChoiceAttributeValue(T value)

Example:

Results:
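Again the listing is missing; the single-value variant is a straightforward sketch:

```csharp
// Sketch: resolve one enumeration value to its display text.
public static string GetChoiceAttributeValue<T>(T value) where T : struct
{
    FieldInfo field = typeof(T).GetField(value.ToString());
    if (field == null)
        return value.ToString();
    var choice = (ChoiceAttribute)Attribute
        .GetCustomAttribute(field, typeof(ChoiceAttribute));
    return choice != null ? choice.Value : value.ToString();
}
```

For example, passing PortfolioType.ModernArt would return the display text carried by its ChoiceAttribute rather than the stripped member name.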

LINQ to SharePoint and RunWithElevatedPrivileges

Most developers will be familiar with the method SPSecurity.RunWithElevatedPrivileges for executing code with the privileges assigned to the App Pool account. When developing custom views or WebParts, there is usually a need to allow the current user (or an anonymous user) to access the details of list items that are restricted to a security group they are not part of. Wrapping data retrieval or commands in this method call will bypass the security context assigned to the current user and interact with lists in an unrestricted way.

When moving from CAML queries to the SharePoint LINQ DataContexts, we would expect to be able to re-use this same method to elevate permissions, causing the DataContext to execute in an unrestricted manner when querying lists. However, an obscure problem exists for the DataContext class: if the SPContext.Current property has been assigned, it will implicitly execute all of its queries using that context, regardless of whether the queries are wrapped in RunWithElevatedPrivileges.

This issue has been identified and discussed on a number of blogs (a very good explanation can be found on Joe Unified, http://jcapka.blogspot.com/2010/05/making-linq-to-sharepoint-work-for.html), with the source of the problem being in the SPServerDataConnection class that manages the connection to the underlying SPWeb that queries will be run against. If SPContext.Current has been assigned a value, it naively accepts this as the execution context; because no new SP* objects are initialized, the associated SPSite and SPWeb objects aren’t set to run under the App Pool account and instead run under the current user.

As some clever developers have figured out, the SPServerDataConnection class can be coerced into initializing new SP* objects by clearing the current HttpContext before instantiating the LINQ to SharePoint DataContext classes. The steps are as follows:

  1. Create a backup reference to the current HttpContext;
  2. Clear the HttpContext (this ensures that the SPServerDataConnection class will initialize new SP* objects when any DataContext classes are constructed);
  3. Run the code by calling the existing SPSecurity.RunWithElevatedPrivileges method;
  4. Re-assign the backup reference of the original HttpContext.

And the resulting code is:
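The original listing did not survive extraction; a sketch implementing the four steps above (the method name RunWithoutSPContext is illustrative, not from the original post):

```csharp
// Sketch of the HttpContext-clearing wrapper around RunWithElevatedPrivileges.
public static void RunWithoutSPContext(SPSecurity.CodeToRunElevated secureCode)
{
    // 1. Create a backup reference to the current HttpContext
    HttpContext backupContext = HttpContext.Current;
    try
    {
        // 2. Clear the HttpContext so SPServerDataConnection
        //    initializes new SP* objects for any DataContext constructed
        HttpContext.Current = null;

        // 3. Run the code under the App Pool account
        SPSecurity.RunWithElevatedPrivileges(secureCode);
    }
    finally
    {
        // 4. Re-assign the original HttpContext
        HttpContext.Current = backupContext;
    }
}
```

The try/finally ensures the original context is restored even if the elevated code throws.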

Returning to the example of the Portfolio list located on SubSiteA this utility method can be called in the following manner (to return all Portfolio items of type ‘Literature’).
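Assuming a wrapper method (here called RunWithoutSPContext, an illustrative name) that performs the backup/clear/elevate/restore steps described above, the call might look like this; the DataContext is constructed directly against the sub-site URL since no SPContext is available inside the elevated block:

```csharp
// Illustrative: query all Portfolio items of type 'Literature' with
// elevated privileges, outside the current user's security context.
List<Portfolio> literature = null;
RunWithoutSPContext(delegate()
{
    using (var dataContext = new PortfolioDataContext("http://server/SubSiteA"))
    {
        literature = (from p in dataContext.Portfolio
                      where p.PortfolioType == PortfolioType.Literature
                      select p).ToList();
    }
});
```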

Conclusion

LINQ to SharePoint is an exciting technology that enables developers to use .NET 3.5’s LINQ syntax to quickly create data access solutions for SharePoint 2010. However, as the technology is still relatively new, there are some nuances that could prevent a person or team from adopting it as easily as they would have in a non-SharePoint environment, such as LINQ to SQL or Entity Framework. We discussed relying on SPMetal and its configuration options to generate the DataContext and related entity classes, as well as the representation of Choice fields and potential ways to data-bind against their values, with some implementation suggestions. We also presented a resolution to the security-context issue for situations where list data needs to be queried in an anonymous context. This certainly isn’t the full breadth or depth of any LINQ to SharePoint discussion, but I hope it has been informative and can aid developers in understanding and embracing this new technology.

Happy coding!

References

  • Managing Data with LINQ to SharePoint, http://msdn.microsoft.com/en-us/library/ee537339.aspx
  • Using LINQ to SharePoint, http://msdn.microsoft.com/en-us/library/ff798478.aspx
  • Reference Implementation: SharePoint List Data Models, http://msdn.microsoft.com/en-us/library/ff798373.aspx
  • Making Linq to SharePoint work for Anonymous users, http://jcapka.blogspot.com/2010/05/making-linq-to-sharepoint-work-for.html
December 05
Intranet Roles and Responsibilities

In order to manage a successful intranet, users must be certain of the tasks they are responsible for on the site.  The roles for intranet users can be separated into seven categories; these are described below.

Intranet Owners (Intranet Sponsor)

The intranet owners are responsible for the final say on decisions relating to the intranet. These individuals must be responsible for ensuring that budgets are available for the projects defined by the intranet steering committee. They must also be available to resolve issues within the steering committee. The intranet owners should meet quarterly but, as mentioned above, they need to make themselves available to resolve issues regarding the intranet.

The intranet owners group should consist of individuals with decision-making authority in the organization. This could be the VP of Communications (since the intranet IS a communication tool) and the VP of Information Technology, for example. The intranet owners group should also include an intranet webmaster. The position of intranet webmaster must be assigned as a full-time job within your company. It's especially important to realize that an intranet portal is not a one-time project that's finished once it launches. The person in charge of the portal needs to stay on the job after launch or the intranet will suffer from portal decay. The role of intranet webmaster does not necessarily need to be filled by a technical resource. This individual will need to understand how to run the intranet, yes, but does not need to understand how the intranet was built. An individual with a communications or marketing background would fit well into this role, provided that person is trained on how to run the intranet.

Intranet Steering Committee

The intranet steering committee will be charged with maintaining the mandate of the intranet. It is suggested that this committee meet on a quarterly basis to focus on the following issues:

  • Establish a Guiding Principle;
  • Define and enforce an intranet mandate and vision;
  • Establish an Intranet Business Model;
  • Create Publishing Policies and standardization; and
  • Project prioritization and budgeting.

The intranet steering committee should be composed of individuals from a cross-section of the company. These individuals should also have decision-making authority within the organization.

Intranet Working Group

While the steering committee will be the portal’s key ambassadors and advocates, it is also important that a larger set of "Portal Champions" be created throughout the organization. These employees will initially be responsible for advocating adoption of the portal within the organization, and will partake in specialized training – to create a "train-the-trainer" environment within the enterprise.

From a longer-term vision, the champions, or other designates, will be responsible for maintaining the integrity of information on their department or company team sites and resource centers, essentially taking on the role of "site manager". Members of the Intranet Working Group will take their direction and policies from members of the Intranet Steering Committee. The members of this group will meet monthly to discuss such topics as the effectiveness of particular features and the adoption of the intranet by members of their departments or teams, as well as to compile suggestions for future enhancements or projects that members of their teams may be interested in. These suggestions will be passed on to the Intranet Steering Committee for discussion during its meetings.

Ideally, the intranet working group would be made up of volunteers who have an interest in technology. These individuals must be given permission to spend a small amount of time during their work week on intranet-related issues and training opportunities.

Content Owners

While these individuals are not responsible for entering content into the system, they are responsible for ensuring that this content is current and accurate and meets the standards defined by the Intranet Steering Committee. These individuals will be selected based on the role they play in the organization, and the content they are assigned will be related to that role. Each business unit throughout your company should have at least one content owner assigned to it.

Content owners will also be responsible for assigning permissions to the content they own. This task has typically been the responsibility of the IT group. Such tasks can severely bog down the IT department, which can sometimes lead to a slow turnaround on permission requests. The tools are available to decentralize these tasks, and they should be.

Content Authors

These individuals report to the content owners and are responsible for typing (or copying) the content to the intranet. Content may have an approval process assigned to it or it may go live as soon as the content author has posted it.

Intranet Visitors

The reason the intranet exists. Make sure you understand your visitors’ needs and that the content you are posting to the intranet is relevant to them. There are many methods of targeting content to different types or groups of visitors. There are tools available to poll or survey visitors to understand what types of information they would like to view on THEIR intranet and how they expect to see it organized. Consult with professionals experienced in Information Architecture to ensure the site is set up correctly… or visitors won’t visit.

Information Technology

The information technology group will play a role in the upkeep and upgrade of the intranet. Their day to day roles and responsibilities will include assigning access to the intranet as well as ensuring that the site stays live and accessible. The Information Technology group will also be responsible for implementing many of the upgrade projects that are defined and approved by the intranet steering committee.
