SharePoint Madness

All about SharePoint and Office365

Archive for February, 2013

16 Key facts on User Authentication methods in SharePoint 2013

Posted by Amit Bhatia on February 7, 2013

I have been working on planning the user authentication methods in SP 2013 and came across a few facts that may prevent headaches later when implementing user authentication in SP 2013.

  1. Office Web Apps can be used only by SharePoint 2013 web applications that use claims-based authentication –  Office Web Apps rendering and editing will not work on SharePoint 2013 web applications that use classic mode authentication. If you migrate SharePoint 2010 web applications that use classic mode authentication to SharePoint 2013, you must migrate them to claims-based authentication to allow them to work with Office Web Apps.
  2. SharePoint 2013 also supports anonymous authentication – Users can access SharePoint content without validating their credentials. Anonymous authentication is disabled by default. You typically use anonymous authentication when you use SharePoint 2013 to publish content that does not require security and is available for all users, such as a public Internet website. In addition to enabling anonymous authentication, you must also configure anonymous access (permissions) on sites and site resources.
  3. In forms-based authentication, credentials are sent in plain-text format – You should not use forms-based authentication unless you are using Secure Sockets Layer (SSL) to encrypt the traffic.
  4. Active Directory Federation Services (AD FS) 2.0 is a SAML token-based authentication environment
  5. Kerberos authentication improves performance and page latency – Kerberos requires the least amount of network traffic to AD DS domain controllers. Kerberos can reduce page latency in certain scenarios, or increase the number of pages that a front-end web server can serve in certain scenarios. Kerberos can also reduce the load on domain controllers.
  6. Kerberos should not be used in internet facing deployments – Kerberos authentication requires client computer connectivity to a KDC and to an AD DS domain controller.
  7. In a multiple SAML-based authentication provider scenario, you can use only one token-signing certificate in a farm – This is the certificate that you export from an IP-STS, copy to one server in the farm, and add to the farm’s Trusted Root Authority list. Once you use this certificate to create an SPTrustedIdentityTokenIssuer, you cannot use it to create another one. To use the certificate to create a different SPTrustedIdentityTokenIssuer, you must delete the existing one first. Before you delete an existing one, you must disassociate it from all web applications that may be using it.
  8. No need for Single affinity in Load balanced Scenarios in SP 2013 – You no longer have to set network load balancing to single affinity when you are using claims-based authentication in SharePoint 2013
  9. People Picker search functionality does not work if the web application uses SAML based authentication – When a web application is configured to use SAML token-based authentication, the SPTrustedClaimProvider class does not provide search functionality to the People Picker control. Any text entered in the People Picker control will automatically be displayed as if it resolves, regardless of whether it is a valid user, group, or claim. If your SharePoint 2013 solution uses SAML token-based authentication, plan to create a custom claims provider that implements custom search and name resolution.
  10. Claims based authentication can have multiple authentication providers in a single zone
  11. Web applications in classic mode can only be created with PowerShell in SP 2013 – Central Administration creates claims-based web applications only.
  12. Classic Mode authentication can only support one type of authentication per zone – Classic Mode only uses Windows authentication mode.
  13. Forms-based and Windows-based authentication can each be used only once when combining multiple authentication methods in a single zone.
  14. At least one zone must be configured to use NTLM for crawl – The crawl component can only use NTLM-based authentication. If NTLM authentication is not configured on the default zone, the crawl component can use a different zone that is configured to use NTLM authentication.
  15. The default zone should always have the most secure settings – The most secure authentication settings are designed for end-user access, and end users are most likely to access the default zone.
  16. Keep the zones to a minimum – Each zone requires an IIS website and adds overhead.
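As fact 11 notes, a classic-mode web application can be created only from PowerShell in SP 2013. A minimal sketch of such a command follows; the names, account, port, and URL are placeholders, not values from any real farm:

# Omitting -AuthenticationProvider creates a classic-mode (Windows) web application.
New-SPWebApplication -Name "Classic Web App" `
    -ApplicationPool "ClassicAppPool" `
    -ApplicationPoolAccount (Get-SPManagedAccount "contoso\spfarm") `
    -AuthenticationMethod NTLM `
    -Port 80 -Url "http://classic.contoso.com"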

Posted in Authentication, SP2013 | Tagged: , , | 2 Comments »

Content Enrichment Service – For a Finer Search Customization Experience in SharePoint 2013

Posted by Amit Bhatia on February 6, 2013

There are some new features in SharePoint 2013 Search. So this time I come up with a new topic: the Content Enrichment Service and how it makes SharePoint 2013 search a pleasant experience.

What is a Content Enrichment web service?

Content enrichment web service callout in SharePoint 2013 enables developers to create an external web service to modify managed properties for crawled items during content processing. The ability to modify managed properties for items during content processing is helpful when performing tasks such as data cleansing, entity extraction, classification, and tagging.

Here are some examples of what you could do:

  • Create new refiners by extracting data.
  • Calculate new refiners based on managed property values.
  • Set the correct case for refinable managed properties.

Content Processing Engine

The content enrichment web service is a SOAP-based service that you can create to receive a callout from the web service client inside the content processing component. The content processing component receives crawled properties from the crawl component and outputs managed properties to the index component. It is important to note that the web service callout can only read managed properties; any crawled property value that the web service needs as input must first be mapped to a managed property. The web service callout can only access managed properties that exist before the callout, not managed properties that are set further down in the flow. The web service callout can pass managed properties back to the flow, but only if they are part of the search schema.

In our example, we have a list of books with fields such as book title, author, written year, publisher, and others. The book titles are not in proper casing: some items in the list have lowercase titles and some have uppercase titles.

Step 1: Create the web service

Here is a web service that reads the books list and creates some managed properties as refiners.

For a basic implementation, do the following:

  1. Include the Microsoft.Office.Server.Search.ContentProcessingEnrichment.dll located in C:\Program Files\Microsoft Office Servers\15.0\Search\Applications\External in your project as a reference.
  2. Implement IContentProcessingEnrichmentService as a web service.

using System;
using System.Collections.Generic;
using System.Globalization;
using Microsoft.Office.Server.Search.ContentProcessingEnrichment;
using Microsoft.Office.Server.Search.ContentProcessingEnrichment.PropertyTypes;

namespace BookService
{
    public class BookService : IContentProcessingEnrichmentService
    {
        // Define variables to hold the new managed properties.
        private readonly Property<string> newBookType = new Property<string>();
        private readonly Property<DateTime> newDateTimeBookModified = new Property<DateTime>();

        private readonly ProcessedItem processedItemHolder = new ProcessedItem
        {
            ItemProperties = new List<AbstractProperty>()
        };

        public ProcessedItem ProcessItem(Item item)
        {
            // The holder is reused across calls, so clear it first.
            processedItemHolder.ItemProperties.Clear();

            // Iterate over all managed properties passed to the web service.
            foreach (var property in item.ItemProperties)
            {
                var s = property as Property<string>;
                if (s != null)
                {
                    // Normalize the casing, e.g. "the hobbit" -> "The Hobbit".
                    // Lowercase first: ToTitleCase leaves all-uppercase words unchanged.
                    TextInfo textInfo = CultureInfo.InvariantCulture.TextInfo;
                    s.Value = textInfo.ToTitleCase(s.Value.ToLower());
                    processedItemHolder.ItemProperties.Add(s);
                }
            }

            // The value of the new string managed property holds the type of book.
            newBookType.Name = "BookType";
            newBookType.Value = ""; // Get the book type value here.
            processedItemHolder.ItemProperties.Add(newBookType);

            // Set the time for when the properties were added by the web service.
            newDateTimeBookModified.Name = "ModifiedByBookType";
            newDateTimeBookModified.Value = DateTime.Now;
            processedItemHolder.ItemProperties.Add(newDateTimeBookModified);

            return processedItemHolder;
        }
    }
}

Step 2: Create the new managed properties that the web service populates

$ssa = Get-SPEnterpriseSearchServiceApplication
$mp = New-SPEnterpriseSearchMetadataManagedProperty -SearchApplication $ssa `
    -Name "ModifiedByBookType" -Type 4 -Queryable $True   # Type 4 = DateTime
$mp.Refinable = $True
$mp.Update()

$mp = New-SPEnterpriseSearchMetadataManagedProperty -SearchApplication $ssa `
    -Name "BookType" -Type 1 -Queryable $True             # Type 1 = Text
$mp.Refinable = $True
$mp.Update()

Step 3: Crawl and search the content.

Enable and configure the web service callout.

$config = New-SPEnterpriseSearchContentEnrichmentConfiguration
$config.Endpoint = ""   # Add the endpoint of your web service, e.g. http://localhost:712/BookService.svc
$config.InputProperties = "Author", "Title", "Publisher"
$config.OutputProperties = "Author", "Title", "Publisher", "ModifiedByBookType", "BookType"
Set-SPEnterpriseSearchContentEnrichmentConfiguration -SearchApplication $ssa `
    -ContentEnrichmentConfiguration $config

Now start a full crawl of the Books list. In the refinement panel in the search center, add the new managed properties as refiners. Save, check in, and publish the changes to the Refinement web part.

Now you can set a filter for book type, such as Book is "Fiction", and see the results. :)

Here is the link to MSDN documentation on Content Enrichment Web Service for SP 2013.

Posted in SP2013 | Tagged: , , , | Leave a Comment »

Backup Strategy for SharePoint 2010

Posted by Amit Bhatia on February 5, 2013

We were asked to develop the backup strategy on SharePoint 2010 for one of our clients. There were two web applications with three site collections. Please note that this article does not give steps on how to write backup PowerShell commands; instead, it talks about the backup strategy adopted in an enterprise environment.

Web Application 1 hosts 2 Site Collection

Web Application 2 hosts 1 site collection

The SP 2010 farm backup requires different strategies for database backup and farm/web/site collection backup. Here I give you an example for one site collection; you may back up multiple site collections as well.

SQL Server Backup (Created by the SQL DBA)

The backups are broken up into Full Backups and Transaction log backups.

  1. Full Backup

The database backups are run via a SQL Server job named UserDatabases.Full Backups. The full backup job runs each night at 8:45 PM. The backups are written to the directory U:\SQL Backup\<dbname>\. The naming convention is <dbname>_backup_yyyy_mm_dd_hhmmss_<uniquenumber>.bak.

The database backup files are kept for three days and are then deleted.
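The nightly full-backup job step can be sketched in PowerShell along these lines, assuming the SQLPS module is available; the instance name, database, and path are placeholders for the job's actual values:

Import-Module SQLPS -DisableNameChecking
# Build a timestamp matching the naming convention above.
$stamp = Get-Date -Format "yyyy_MM_dd_HHmmss"
Backup-SqlDatabase -ServerInstance "SPSQL01" -Database "WSS_Content" `
    -BackupFile "U:\SQL Backup\WSS_Content\WSS_Content_backup_$stamp.bak"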

2. Transaction Logs

The transaction log backups are run via a SQL Server job named UserDatabases.Log Backups, which runs every 30 minutes, on the hour and the half hour, throughout the day. The logs are written to the directory U:\SQL Backups\<dbname>\. The naming convention is <dbname>_yyyy_mm_dd_hhmmss_<lognumber>.trn. The log files are kept for two days and then deleted.

3. Mirroring

The following SharePoint databases are mirrored to the DR server continuously.

  • WSS_Content (content database)

4. Networker Backups

The database files and the transaction logs are backed up by Networker daily at 9:00pm.

SharePoint Farm, Site Collection and Content Database Backup

We have three scheduled tasks running on the SharePoint Production Application Server.

  1. Full SP Farm Backup – It takes a complete backup of the SharePoint farm on which the SharePoint sites are hosted. The complete farm backup includes the farm configuration, web applications, IIS configurations, site collections, service application databases, content databases, and configuration databases. This scheduled task calls the backup script and runs at 11:59 PM every Saturday (once a week).
  2. RunDifferentialSPFarmBackup – It takes a differential backup of the SharePoint farm on which the SharePoint sites are hosted. It runs at 8:00 PM daily.
  3. HourlyContentDBBackup – It takes an hourly backup of the content database to ensure the latest content database is available in case of a disaster.

The Networker runs daily at 9:00 PM to copy these files over to the backup media tapes or devices.

The backups are run by PowerShell commands tied to the Windows Task Scheduler.
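The scheduled farm backup tasks above can be sketched as follows; a minimal example assuming the SharePoint snap-in, with the backup share path as a placeholder:

Add-PSSnapin Microsoft.SharePoint.PowerShell
# Weekly full farm backup (task 1):
Backup-SPFarm -Directory \\backupserver\spbackup -BackupMethod Full
# Daily differential farm backup (task 2):
Backup-SPFarm -Directory \\backupserver\spbackup -BackupMethod Differential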

I hope this simple strategy helps you decide on the backup and disaster recovery process for your enterprise.

Posted in SharePoint 2010 | Tagged: , | 1 Comment »

Install SharePoint 2010 Cumulative Updates

Posted by Amit Bhatia on February 5, 2013

This guide explains the process of applying a cumulative update (CU) from Microsoft to SharePoint 2010 farm servers. You need to patch the farm servers with software updates, update rollups, service packs, feature packs, critical updates, security updates, or hotfixes.

Refer to the link below for the latest available updates for SharePoint products.

For the exact steps to install a CU on SP 2010, refer to the document attached below.

SharePoint 2010 CU Steps <—- SharePoint 2010 CU Install steps document
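Note that installing the CU binaries alone does not complete the patch: after the binaries are installed on every server in the farm, the upgrade must be finalized with the SharePoint Products Configuration Wizard or its command-line equivalent, along these lines:

PSConfig.exe -cmd upgrade -inplace b2b -wait

Run this on each server in the farm once the binaries are in place.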

I hope the above document and the steps mentioned in it will be of use to all the readers 🙂

Posted in Uncategorized | Tagged: , | Leave a Comment »
