2008/10/23

Sprint WinMo Roundup

Over the last couple of months I’ve been trying to find a new phone to replace my old and busted Sanyo VI2300 (read: no-cam, flip-out brick). The rest of this post is for anyone else looking to get into some of the new hotness that’s available. Since I am a .Net developer and the time I have spent with any Apple product has been annoying at best, I was looking to upgrade to a Windows Mobile device.

The first go-round I picked up a Samsung ACE (SPH-i325). This was a nice, sleek phone running Windows Mobile 6: thinner than a pack of playing cards, with an easy-to-use thumbboard. The response on this phone was fast, and it had a microSD slot for additional storage. The phone also sported two features that I instantly fell in love with: Automatic Speech Recognition (ASR) and Automatic Profiling. It also supported international calling over GSM networks using either the provided or a pre-paid SIM card (something I didn’t use). A pretty good phone for about $100. But it did have shortcomings. After playing around with Live Search and Google Maps, the lack of an integrated GPS radio really stood out as a deal breaker, especially after finding no upgrade path to Windows Mobile 6.1.

Round two went to an HTC Mogul (PPC6800). This phone was a brick, but it had a HUGE touch screen and a slide-out QWERTY keyboard. It also had the much-desired GPS, an upgrade path to Windows Mobile 6.1, and a microSD slot. But it was lacking my new loves, ASR and Automatic Profiling, and the ROM seemed a little sluggish. There are plenty of 3rd-party ROMs and tools to get an ALMOST automatic switch to vibrate mode during meetings. For nearly $300, I’d want to be ecstatic about my phone, not permanently searching for ways to add the functionality that the ACE had.

At this point, you might ask, “What’s the big deal with ASR and Auto Profiling?” ASR should be on EVERY phone, PERIOD. ASR is more than the voice recognition on many phones where you have to record a dumb tag (“Call Joe Smith Mobile”, for example) and then assign that tag to the action you want it to perform. ASR is so much better. It has built-in actions (Call, Open, Play, etc.), so you can put in a new contact (Joe Smith) and then instantly say “Call Joe Smith Mobile” and it just works. No pre-recording; you just speak. Additionally, Ringer Profiles with Automatic Profiling should be on ANY phone that has a calendaring application. Automatic Profiling puts your phone into vibrate during scheduled meetings. This is especially important if your team has a rule of “if your phone rings during a meeting, you buy donuts.”

Finally, we come to my current hotness. The HTC Touch Diamond (MP6950) is a true iPhone killer. It’s a thin phone with a big, beautiful touch screen and a built-in GPS radio. It comes with Windows Mobile 6.1 pre-loaded and has both ASR and Automatic Profiling. Also, the TouchFLO 3D UI is frickin’ sweet. The different touch keyboards available make input easy with a stylus or by touch via Compact QWERTY. Additionally, there are accelerometers to determine whether the phone is in portrait or landscape orientation. The phone comes with 4GB of internal storage. It is the greatest combination of features I’ve seen in a phone, and it is the best phone Sprint has available. So much awesome is packed into this tiny form factor that I don’t think I put it down for the first week.

I know that HTC is about to release the Touch Pro, with all the goodness of the Touch Diamond plus a slide-out keyboard, a headphone jack (the Diamond uses a dongle for this), and a microSDHC slot for something like 32GB of additional storage. I think I’m good with the Diamond and have had enough of the headaches associated with returning phones for exchange. I don’t think the HTC models can be beat. The Touch Pro is great for those who like the slide-out keyboard, and the Touch HD is supposed to have a camera that beats anything on the market. But if you can’t wait, you can’t go wrong with the Diamond.

2008/07/16

Got TFS Hate?

Jeff Hunsaker is looking for TFS haters and asking how to make your pain go away.

2008/07/07

When good coders write bad code.

I want to air some dirty laundry. Why is it that nobody seems to ask questions? Is it that we are too full of ourselves? Or that the industry rewards answers but not questions? Is it that someone who asks questions is seen as a n00b? Why do we have this stigma of "I have a dumb question" when we are new on a project?

I ran into a side-effect of this mentality on my latest project. Recently, I was working on optimizing some SQL queries that were running slowly. In the middle of plowing through some of the newly added functions, I came across one that really stuck out.

In a nutshell, the function takes a case identifier and returns a "hierarchical" start date: if ActualEndDate exists, return ActualBeginDate; if ApprovedEndDate exists, return ApprovedBeginDate; if CertifiedEndDate exists, return CertifiedBeginDate; if RequestedEndDate exists, return RequestedBeginDate; and if all of these conditions are false, return NULL.


Here is some sample data:





CaseID | RequestedBegin | RequestedEnd | CertifiedBegin | CertifiedEnd | ApprovedBegin | ApprovedEnd | ActualBegin | ActualEnd
1      | 7/1/2008       | 7/31/2008    | 7/1/2008       | 7/14/2008    | 7/1/2008      | 7/7/2008    | 7/3/2008    | 7/7/2008
2      | 8/1/2008       | 8/30/2008    | 8/3/2008       | 8/30/2008    | 8/4/2008      | 8/30/2008   | 8/10/2008   | NULL
3      | 9/1/2008       | 9/30/2008    | 9/2/2008       | NULL         | 9/3/2008      | NULL        | 9/4/2008    | NULL


And here are the calculated hierarchical dates:





CaseID | HierarchicalDate
1      | 7/3/2008
2      | 8/4/2008
3      | 9/1/2008


So looking at the function that is used to return the scalar value, I see this:



CREATE FUNCTION [dbo].[getHierarchicalDate]
(
    @CaseID bigint
)
RETURNS datetime
AS
BEGIN

declare @HierarchialDate datetime
declare @ActualEnd datetime
declare @ApprovedEnd datetime
declare @CertifiedEnd datetime
declare @RequestedEnd datetime

set @ActualEnd =
    (select ActualEnd from cases where caseid = @caseID)
set @ApprovedEnd =
    (select ApprovedEnd from cases where caseid = @caseID)
set @CertifiedEnd =
    (select CertifiedEnd from cases where caseid = @caseID)
set @RequestedEnd =
    (select RequestedEnd from cases where caseid = @caseID)
if @RequestedEnd is not null
    if @CertifiedEnd is not null
        if @ApprovedEnd is not null
            if @ActualEnd is not null
                set @HierarchialDate =
                    (select ActualBegin from cases
                        where caseid = @caseID)
            else
                set @HierarchialDate =
                    (select ApprovedBegin from cases
                        where caseid = @caseID)
        else
            set @HierarchialDate =
                (select CertifiedBegin from cases
                    where caseid = @caseID)
    else
        set @HierarchialDate =
            (select RequestedBegin from cases
                where caseid = @caseID)

RETURN @HierarchialDate

END




Yup, 8 SELECT statements. And this was written by a developer who has been in architect roles. Someone respected. Someone who should know better. So with a little adjustment, the function becomes this:


CREATE FUNCTION [dbo].[getHierarchicalDate]
(
    @CaseID bigint
)
RETURNS datetime
AS
BEGIN

declare @HierarchialDate datetime

select @HierarchialDate = CASE
      WHEN ActualEnd IS NOT NULL and ActualBegin is not null
        THEN ActualBegin
      WHEN ApprovedEnd IS NOT NULL and ApprovedBegin is not null
        THEN ApprovedBegin
      WHEN CertifiedEnd IS NOT NULL and CertifiedBegin is not null
        THEN CertifiedBegin
      WHEN RequestedEnd IS NOT NULL and RequestedBegin is not null
        THEN RequestedBegin
      ELSE NULL END
from [Cases] where caseid = @CaseID

RETURN @HierarchialDate

END




So we're down to one SELECT statement. One table scan. All because nobody took a step back to ask, "Is there a better way?" This is something that seems to be all too prevalent in our industry. Nobody wants to admit they're in over their head, that they don't know everything, that someone else might have a better solution. I don't have the answers, so I'd love to hear any feedback.

2008/06/17

Software Development Meme

Ok, ok, ok... Foreman Bob/Steve Horn called me out on not having posted a meme on my personal coding history. <meme>

How old were you when you first started programming?
I'm not sure if it was 3rd or 4th grade when I started coding in BASIC on an Apple IIe for about 3 months. Much of it was minor tweaks to "busy-work" type applications.

I pretty much gave up on coding for quite a while. Games were much more interesting (the NES had recently been released and I <3 8-bit graphics). During my sophomore year in high school (1995), I caught wind of this "series of tubes" and picked up HTML for a class project: designing the school's webpage... complete with the much-loved nested tables and shim images.

I continued web development through college even though I changed majors out of the CS department, because it was all console UNIX C/C++. The IT degree (yes, that's INDUSTRIAL Technology) had the VBA and ASP courses. As an added bonus, I got to cut and melt things.

What was your first language?
I really don't want to say BASIC, as it was mostly copying from a book and tweaking parameters. So, I'll go with the safe bet and say HTML.

What languages have you used since you started programming?
BASIC, HTML, JavaScript, PASCAL, C, C++, VB/VBA/VBScript, PL/SQL, T-SQL, WiseScript, InstallScript, VB.Net, C#, WinBatch, AJAX. Also, a bit of G-Code and whatever Rockwell Automation PLCs use.

What was your first professional programming gig?
That would be working at Ohio University to earn beer money. I had moved up from the help desk to NT administration, which required writing batch files and VBScripts to automate system setup and configuration. Somehow, it also involved creating the department websites.

If you knew then what you know now, would you have started programming?
Actually, I would have started programming earlier. I would have hopped straight into programming instead of sinking time into Infrastructure Operations until I had "paid my dues".

If there is one thing you learned along the way that you would tell new developers, what would it be?
Be a jack of all trades and a master of (at least) one. You'll need to have a little bit of knowledge in many subjects to be adaptable but find a niche and be the subject matter expert in that area.

What's the most fun you've ever had ... programming?
Somehow, it's the projects that are the most annoying at the time that are the most fun in retrospect. But I'd probably have to go with tweaking out TFS with ASMX subscriptions to trigger automated builds, customizations, and deployments. Spinning up a managed instance of the MSBuild engine while everyone else is running PowerShell makes me giddy.

Next up
Greg Bahrey
Arnulfo Wing
Dan Shultz
Alexey Govorine

</meme>

2008/05/05

More dead kittens

A co-worker of mine enjoys using the axiom, "Every time you put business logic in the database, God kills a kitten." Needless to say, we have a large quantity of dead kittens due to our application. Kittens or no, my take is that if you are going to put logic in the database, you had better be doing error handling.

So imagine my disdain when I was told not to put a transaction and try...catch in a stored procedure performing multiple inserts and updates, but instead to spin up a SqlTransaction object in C#, encompassed in a try/catch in the application.

You may ask what the difference is. Either way there is still a transaction, and errors are still caught so that a nasty message doesn't bubble up to the client.
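For reference, the application-side pattern I was being pushed toward looks roughly like the sketch below (a minimal illustration only; the connection string, stored procedure names, and overall structure are placeholders, not our actual code):

using System;
using System.Data;
using System.Data.SqlClient;

class TransactionFromTheApp
{
    static void SaveCase(string connectionString)
    {
        using (SqlConnection conn = new SqlConnection(connectionString))
        {
            conn.Open();
            SqlTransaction tran = conn.BeginTransaction();
            try
            {
                // Each statement is a separate round trip to the server.
                SqlCommand insert = new SqlCommand("InsertCaseDetail", conn, tran);
                insert.CommandType = CommandType.StoredProcedure;
                insert.ExecuteNonQuery();

                SqlCommand update = new SqlCommand("UpdateCaseStatus", conn, tran);
                update.CommandType = CommandType.StoredProcedure;
                update.ExecuteNonQuery();

                tran.Commit();
            }
            catch (SqlException)
            {
                // Roll back everything and surface a friendly message instead of the raw error.
                tran.Rollback();
            }
        }
    }
}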

The biggest advantage I see of putting the extra code in the database is that, as soon as the error occurs, the batch is terminated. Using the C# route to accomplish the rollback and error handling allows statements to keep executing after the initial failure, adding unnecessary load to the server as it attempts to continue processing. Additionally, the root failure can easily be lost as errors pile up from the subsequent statements.

A simple example of this can be accomplished with the following stored proc being called from a form load event.


CREATE PROCEDURE SomeErrors
AS
BEGIN

DECLARE @Zero int,
@Value int
SET @Zero = 0

SELECT @Value = 100/@Zero
SELECT 'The value is ' + @Value
END
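The form-load caller for this test can be as simple as the sketch below (again, not the original code; the connection string is a placeholder):

using System;
using System.Data;
using System.Data.SqlClient;
using System.Windows.Forms;

public class TestForm : Form
{
    protected override void OnLoad(EventArgs e)
    {
        base.OnLoad(e);
        try
        {
            using (SqlConnection conn = new SqlConnection("Data Source=.;Initial Catalog=Sandbox;Integrated Security=True"))
            {
                SqlCommand cmd = new SqlCommand("SomeErrors", conn);
                cmd.CommandType = CommandType.StoredProcedure;
                conn.Open();
                cmd.ExecuteNonQuery();
            }
        }
        catch (SqlException ex)
        {
            // With no TRY...CATCH in the proc, ex.Message carries every error the batch raised.
            MessageBox.Show(ex.Message);
        }
    }
}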



The error bubbled up is "Divide by zero error encountered.\r\nConversion failed when converting the varchar value 'The value is ' to data type int."

As you can see both the divide by zero and the invalid casting were executed. With a simple modification, the server will report the first error and stop the batch immediately.

ALTER PROCEDURE SomeErrors
AS
BEGIN

DECLARE @Zero int,
@Value int
SET @Zero = 0
BEGIN TRY
SELECT @Value = 100/@Zero
SELECT 'The value is ' + @Value
END TRY
BEGIN CATCH
DECLARE @ErrorSeverity INT, @ErrorNumber INT, @ErrorMessage NVARCHAR(4000), @ErrorState INT
SELECT @ErrorSeverity = ERROR_SEVERITY(),
@ErrorNumber = ERROR_NUMBER(),
@ErrorMessage = ERROR_MESSAGE(),
@ErrorState = ERROR_STATE()
IF @ErrorState = 0
SET @ErrorState = 1

RAISERROR ('ERROR OCCURRED:%d; %s', @ErrorSeverity, @ErrorState, @ErrorNumber, @ErrorMessage)
END CATCH
END



Now we only see the first error bubble up to the application: "ERROR OCCURRED:8134; Divide by zero error encountered."

I'd love to hear what other opinions are on this matter.

2008/04/26

Scott Hanselman talks MultiCore MSBuilding

Use that extra core or three you have sitting idle in your box.

Also see Scott's companion post on how to get your build server to stop being limited to a single core: http://www.hanselman.com/blog/FasterBuildsWithMSBuildUsingParallelBuildsAndMulticoreCPUs.aspx

Great info as usual Scott.

2008/04/23

Using CTE instead of CURSOR

Today, I came upon a need to change a scalar-valued function in SQL 2005 that we are using to generate confirmation numbers, based on an algorithm much like the following:

Characters 1-5 will be the number of days since 1900.01.01; zero-padded if needed

Characters 6-10 will be the number of seconds since Today 00:00:00; zero-padded if needed

If this number is already in use, increment by 1 until reaching an unused number
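To make the date math concrete, here is a rough C# equivalent of how the base number gets built (an illustration only; the real generation happens in the T-SQL function below):

using System;

class ConfirmationNumberSketch
{
    static string BuildBaseNumber(DateTime now)
    {
        // Characters 1-5: days since 1900-01-01, zero-padded to 5 digits
        string days = ((int)(now.Date - new DateTime(1900, 1, 1)).TotalDays).ToString("D5");

        // Characters 6-10: seconds since midnight today, zero-padded to 5 digits
        string seconds = ((int)now.TimeOfDay.TotalSeconds).ToString("D5");

        return days + seconds;
    }

    static void Main()
    {
        Console.WriteLine(BuildBaseNumber(DateTime.Now));
    }
}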


So I had two options to ensure that I properly checked for the numerically lowest available value.


  1. Create a cursor, incrementing a variable that stores the generated confirmation number until I run out of matches.

  2. Use a Common Table Expression and have it recursively loop through to get the max sequential number, starting at the generated confirmation number


As a disclaimer, I don't really like using cursors; I prefer to do as much as I can as a batch process. Half the time, I have to look up the syntax to make sure I am properly building up and deallocating the cursor. In short, I took option #2. CTEs are supposed to be easier resource-wise... maybe, possibly. I'll map the execution plan at a later time.


In the end, the following was the solution chosen. Comments are inline, but if you have any questions, comments, or concerns, feel free to leave feedback.


-- =============================================
-- Author:      Paul Montgomery
-- Create date: 2008.04.23
-- Description: Gets a time-based confirmation number consisting of number of days
-- Blog: http://betterlivingthroughcoding.blogspot.com/2008/04/using-cte-instead-of-cursor.html
-- =============================================
CREATE FUNCTION GetMeAConfirmationNumberForNow()
RETURNS NVARCHAR(50)
AS
BEGIN
    DECLARE @days VARCHAR(5),
            @seconds VARCHAR(5)
    SET @days = RIGHT('00000' + CAST(DATEDIFF(d, 0, GetDate()) AS VARCHAR), 5)
    SET @seconds = RIGHT('00000' + CAST(DATEDIFF(s, CONVERT(varchar(10), GETDATE(), 101), GetDate()) AS VARCHAR), 5)

    --Get what would be this second's confirmation number
    DECLARE @ConfirmationNumber nvarchar(50)
    SET @ConfirmationNumber = @days + @seconds
    SET @ConfirmationNumber = '3955979792'

    /*
    -- If we were completely sure there would NEVER be a "future" confirmation
    -- number being mistakenly put in the table, we could just do this.
    IF EXISTS (SELECT ConfirmationNumber
        FROM SomeTable
        WHERE ConfirmationNumber <> 'SomeBadData' --weed out any invalid data
        AND CAST(ConfirmationNumber AS BIGINT) > CAST(@ConfirmationNumber AS BIGINT))
    BEGIN
        SELECT @ConfirmationNumber = CAST(MAX(CAST(ConfirmationNumber AS BIGINT)) + 1 AS NVARCHAR(50))
        FROM SomeTable
    END
    -- But since we had some crazy data, I wanted to be sure that I was going
    -- to pull the smallest unused number that was equal to or greater than the
    -- confirmation number generated via our algorithm
    */

    /*
    Select all the confirmation numbers from our table that
    are numerically greater than @ConfirmationNumber
    */
    DECLARE @TempTable TABLE(
        ConfirmationNumber nvarchar(50))
    INSERT INTO @TempTable(ConfirmationNumber)
    SELECT ConfirmationNumber
    FROM SomeTable
    WHERE ConfirmationNumber <> 'SomeBadData' --had to remove some bogus rows
    AND CAST(ConfirmationNumber AS BIGINT) > CAST(@ConfirmationNumber AS BIGINT)
    GROUP BY ConfirmationNumber --Yep, had some dupes posted, luckily only in dev/test

    --PlaceHolder
    DECLARE @MaxConfirmationNumber nvarchar(50);

    --Time for the CTE to find my sequential values
    WITH confirmation_numbers (ConfirmationNumber)
    AS
    (
        SELECT ConfirmationNumber
        FROM @TempTable
        WHERE CAST(ConfirmationNumber AS BIGINT) = CAST(@ConfirmationNumber AS BIGINT) + 1
        UNION ALL
        --This is where the magic happens
        SELECT tt.ConfirmationNumber
        FROM @TempTable tt
        INNER JOIN confirmation_numbers
            ON CAST(tt.ConfirmationNumber AS BIGINT) = CAST(confirmation_numbers.ConfirmationNumber AS BIGINT) + 1
    )
    -- We need to increment the max by 1 and get it back to an nvarchar
    SELECT @MaxConfirmationNumber = CAST(MAX(CAST(ConfirmationNumber AS BIGINT)) + 1 AS NVARCHAR(50))
    FROM confirmation_numbers

    --Check that we have this confirmation number or sequential ones higher
    IF (@ConfirmationNumber <= @MaxConfirmationNumber)
        SET @ConfirmationNumber = @MaxConfirmationNumber

    --SELECT @ConfirmationNumber
    RETURN @ConfirmationNumber
END
GO

2008/04/21

Delay signing a ClickOnce application

Along the lines of build once, deploy many, I have been working on setting up a ClickOnce application that is compiled and published once. This published set of files is then configured for a particular environment, signed, and set out for public consumption. The problem is that when a ClickOnce application is published, the application and deployment manifests get signed, and any changes to the files mean that the checksum values no longer line up. So how do you publish once and then make changes without breaking the newly signed manifests? Below is a subset of the proj file we are using. The csproj is called with OutputPath=$(DropLocation)\Setup\Raw\ and Targets=PublishOnly.


<?xml version="1.0" encoding="utf-8" ?>
<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003" DefaultTargets="Publish">
  <Import Project="$(MSBuildExtensionsPath)\MSBuildCommunityTasks\MSBuild.Community.Tasks.Targets"/>
  <PropertyGroup>
    <DropLocation Condition="'$(DropLocation)'==''">C:\VSDumpingGround\OurApp</DropLocation>
    <MagePath Condition="'$(MagePath)'==''">C:\Program Files\Microsoft SDKs\Windows\v6.0A\Bin\mage.exe</MagePath>
    <CertFile Condition="'$(CertFile)'==''">$(DropLocation)\OurApp.Key.pfx</CertFile>
    <SigningCertPassword Condition="'$(SigningCertPassword)'==''">password</SigningCertPassword>
    <PublisherName Condition="'$(PublisherName)'==''">Our Company</PublisherName>
    <BuildNumberFile>C:\VSDumpingGround\BuildNumber.txt</BuildNumberFile>
  </PropertyGroup>
  <PropertyGroup>
    <PublishDependsOn>
      MageOurApp;
      RemoveRawFiles
    </PublishDependsOn>
  </PropertyGroup>
  <Target Name="Publish" DependsOnTargets="$(PublishDependsOn)">
  </Target>
  <Target Name="VersionOurApp">
    <!-- MSBuild Community Tasks: get and increment the version number stored in a text file -->
    <Version RevisionType="Increment" VersionFile="$(BuildNumberFile)">
      <Output PropertyName="Major" TaskParameter="Major"/>
      <Output PropertyName="Minor" TaskParameter="Minor"/>
      <Output PropertyName="Build" TaskParameter="Build"/>
      <Output PropertyName="Revision" TaskParameter="Revision"/>
    </Version>
    <Version Major="$(Major)" Minor="$(Minor)" Build="$(Build)" Revision="$(Revision)">
    </Version>
  </Target>
  <Target Name="MoveApplicationFiles" DependsOnTargets="VersionOurApp">
    <CreateProperty Value="$(DropLocation)\Setup\OurApp.application">
      <Output PropertyName="ApplicationFile" TaskParameter="Value"/>
    </CreateProperty>
    <!-- Move all files in the app.publish directory up to the setup folder -->
    <CreateItem Include="$(DropLocation)\Setup\Raw\app.publish\*" Exclude="$(ApplicationFile)">
      <Output ItemName="MoveSetup" TaskParameter="Include"/>
    </CreateItem>
    <Copy SourceFiles="@(MoveSetup)"
          DestinationFolder="$(DropLocation)\Setup\"/>
    <!-- Remove any duplicate files from the list -->
    <CreateItem Include="$(DropLocation)\Setup\Raw\app.publish\Application Files\OurApp*\*">
      <Output ItemName="AppFilesRoot" TaskParameter="Include"/>
    </CreateItem>
    <RemoveDuplicates Inputs="@(AppFilesRoot -> '%(RootDir)%(Directory)')">
      <Output ItemName="FilteredAppFilesRoot" TaskParameter="Filtered"/>
    </RemoveDuplicates>
    <CreateItem Include="%(FilteredAppFilesRoot.Identity)**\*">
      <Output ItemName="MoveAppFiles" TaskParameter="Include"/>
    </CreateItem>
    <!-- Copy the deploy files to the Application Files\OurApp_X_X_X_X directory -->
    <Copy SourceFiles="@(MoveAppFiles)"
          DestinationFolder="$(DropLocation)\Setup\Application Files\OurApp_$(Major)_$(Minor)_$(Build)_$(Revision)\%(RecursiveDir)"/>
    <RemoveDir Directories="$(DropLocation)\Setup\Raw\"/>
  </Target>
  <PropertyGroup>
    <PrepOurAppDependsOn>
      MoveApplicationFiles;
      VersionOurApp;
      ConfigureOurApp
    </PrepOurAppDependsOn>
  </PropertyGroup>
  <Target Name="PrepOurApp" DependsOnTargets="$(PrepOurAppDependsOn)">
    <CreateProperty Value="$(DropLocation)\Setup\Application Files\OurApp_$(Major)_$(Minor)_$(Build)_$(Revision)">
      <Output TaskParameter="Value" PropertyName="ApplicationDirectory"/>
    </CreateProperty>
    <CreateProperty Value="$(DropLocation)\Setup\Application Files\OurApp_$(Major)_$(Minor)_$(Build)_$(Revision)\OurApp.exe.manifest">
      <Output PropertyName="ManifestFile" TaskParameter="Value"/>
    </CreateProperty>
    <CreateProperty Value="$(DropLocation)\Setup\Raw">
      <Output PropertyName="RawDirectory" TaskParameter="Value"/>
    </CreateProperty>
    <!-- Remove the manifest file if it exists -->
    <Delete Files="$(ManifestFile)" ContinueOnError="true"/>
    <!-- Copy files to a new Raw directory -->
    <CreateItem Include="$(ApplicationDirectory)\**\*"
                Exclude="$(ManifestFile)">
      <Output ItemName="DeployFiles"
              TaskParameter="Include"/>
    </CreateItem>
    <!-- This removes the .deploy from the files as you copy them to a new Raw directory -->
    <Copy SourceFiles="@(DeployFiles)"
          DestinationFiles="@(DeployFiles -> '$(DropLocation)\Setup\Raw\%(RecursiveDir)\%(FileName)')"/>
  </Target>
  <PropertyGroup>
    <ConfigureOurAppDependsOn>
      VersionOurApp;
      MoveApplicationFiles
    </ConfigureOurAppDependsOn>
  </PropertyGroup>
  <Target Name="ConfigureOurApp" DependsOnTargets="$(ConfigureOurAppDependsOn)">
    <!-- See my blog about editing XML config files.
         You would use the files in the newly created $(DropLocation)\Setup\Raw\... location(s) -->
  </Target>
  <PropertyGroup>
    <MageOurAppDependsOn>
      ConfigureOurApp;
      VersionOurApp;
      PrepOurApp
    </MageOurAppDependsOn>
  </PropertyGroup>
  <Target Name="MageOurApp" DependsOnTargets="$(MageOurAppDependsOn)">
    <!-- Generate new application manifest -->
    <GenerateApplicationManifest
        AssemblyName="OurApp.exe"
        AssemblyVersion="$(Major).$(Minor).$(Build).$(Revision)"
        EntryPoint="$(RawDirectory)\OurApp.exe"
        OutputManifest="$(ManifestFile)"/>
    <!-- Sign the application manifest -->
    <!-- %22 takes the place of " -->
    <!-- This signs the newly created manifest -->
    <Exec Command="%22$(MagePath)%22 -Update %22$(ManifestFile)%22 -fd %22$(RawDirectory)%22 -cf %22$(CertFile)%22 -pwd $(SigningCertPassword)"/>
    <!-- Generate new deployment manifest -->
    <GenerateDeploymentManifest
        AssemblyName="OurApp.application"
        AssemblyVersion="$(Major).$(Minor).$(Build).$(Revision)"
        DeploymentUrl="http://somecompany.com/setup/OurApp.application"
        DisallowUrlActivation="false"
        EntryPoint="$(ManifestFile)"
        Install="true"
        MapFileExtensions="true"
        MinimumRequiredVersion="$(Major).$(Minor).$(Build).$(Revision)"
        OutputManifest="$(ApplicationFile)"
        Product="Our Product"
        Publisher="$(PublisherName)"
        UpdateEnabled="true"
        UpdateMode="Foreground"/>
    <!-- Sign the deployment manifest -->
    <!-- %22 takes the place of " -->
    <!-- We had issues with Publisher and Product not properly propagating to the .application file.
         We are on framework 3.0, so we can't use UseApplicationTrust, Publisher, or Product on the GenerateApplicationManifest task -->
    <!-- This signs the newly created manifest and forces in Publisher/Product. Marks the deployment manifest as used for trust -->
    <Exec Command="%22$(MagePath)%22 -Update %22$(ApplicationFile)%22 -cf %22$(CertFile)%22 -providerurl %22http://somecompany.com/setup/OurApp.application%22 -Tofile %22$(ApplicationFile)%22 -appm %22$(ManifestFile)%22 -pwd $(SigningCertPassword) -pub %22$(PublisherName)%22 -UseManifestForTrust true"/>
  </Target>
  <Target Name="RemoveRawFiles" DependsOnTargets="MageOurApp">
    <RemoveDir Directories="$(RawDirectory)"/>
  </Target>
</Project>

XML editing via MSBuild

Every environment I have been in has strived for a single build that can be deployed to multiple environments. This means one compilation and a single set of binaries; different settings per environment are set in config/XML files. No big surprise. And yes, I know that the MSBuild Community Tasks have an XML editing task. I prefer mine, in part because our config files have a single namespace, so if Microsoft decides to change the namespace declaration, I won't have to revisit my proj files. Also, I have options to replace full or partial values for attributes or InnerText, and you can recursively search a directory for files matching the search pattern.

Bringing on the class




using System;
using System.Collections.Generic;
using System.Xml;
using System.Text;
using Microsoft.Build.Utilities;
using Microsoft.Build.BuildEngine;
using Microsoft.Build.Framework;
using System.IO;
namespace PaulMontgommery.Custom.Tasks
{
///<summary>
/// Updates an XML document using an XPath expression.
/// </summary>
/// <example>Update an XML element.
/// <code><![CDATA[
/// <EditXml Document="C:\VSProjects\MyProject\*.config" XPath="//configuration/appSettings/add[@key='SMTPPort']" Value="26" />
/// <EditXml Document="*.config" Folder="C:\VSProjects\MyProject\" XPath="//configuration/appSettings/add[@key='SMTPPort']" Value="26" />
/// <EditXml Document="*.config" Folder="C:\VSProjects\MyProject\" XPath="//p:configuration/p:appSettings/p:add[@key='SMTPPort']" Value="26" Prefix="p" />
/// <EditXml Document="*.config" Folder="C:\VSProjects\MyProject\" XPath="//p:configuration/p:appSettings/p:add[@key='SMTPPort']" Value="26" Prefix="p" Attribute="value" />
/// ]]></code>
/// </example>
/// <remarks>
/// The XML node being updated must exist before using the EditXml task.
/// </remarks>
public class EditXml : Task
{
#region Class Variables
string _document = string.Empty;
string _folder = string.Empty;
string _xPath = string.Empty;
string _value = string.Empty;
string _replacedText = string.Empty;
string _attribute = string.Empty;
string _prefix = string.Empty;
bool _recursive = false;
bool _continueOnError = false;
bool _condition = true;
#endregion
#region Public Properties
/// <summary>
/// Required. Document to perform edit on. Wildcards are allowed.
/// </summary>
[Required]
public string Document
{
get { return _document; }
set { _document = value; }
}
/// <summary>
/// Optional. Folder to begin searching.
/// </summary>
public string Folder
{
get { return _folder; }
set { _folder = value; }
}
/// <summary>
/// Required. XPath statement to find value to edit.
/// </summary>
[Required]
public string XPath
{
get { return _xPath; }
set { _xPath = value; }
}
/// <summary>
/// Optional. Namespace prefix for XPath statement.
/// </summary>
public string Prefix
{
get { return _prefix; }
set { _prefix = value; }
}
/// <summary>
/// Required. Value to be placed into document as InnerText or as "value" attribute if innertext is null.
/// </summary>
[Required]
public string Value
{
get { return _value; }
set { _value = value; }
}
/// <summary>
/// Optional. Value to be replaced with <see cref="Value"/>.
/// </summary>
public string ReplacedText
{
get { return _replacedText; }
set { _replacedText = value; }
}
/// <summary>
/// Optional name of attribute to perform edit on.
/// </summary>
public string Attribute
{
get { return _attribute; }
set { _attribute = value; }
}
/// <summary>
/// Optional. Specifies whether subfolders of <see cref="Folder"/> should be searched.
/// Default is false.
/// </summary>
public bool Recursive
{
get { return _recursive; }
set { _recursive = value; }
}
/// <summary>
/// Optional. Specifies whether process should continue if an exception is thrown. Default is false.
/// </summary>
public bool ContinueOnError
{
get { return _continueOnError; }
set { _continueOnError = value; }
}
/// <summary>
/// Optional. A Run-time check to see if this process should execute. Default is true.
/// </summary>
public bool Condition
{
get { return _condition; }
set { _condition = value; }
}
#endregion
#region Public Methods
public override bool Execute()
{
// System.Diagnostics.Debugger.Launch();
if (!_condition)
return true;
bool success = false;
try
{
//We must have a folder or document to edit
if (string.IsNullOrEmpty(_folder) && _recursive)
throw new NullReferenceException("Folder must be specified for recursive searches.");
//We must have an XPath statement or ReplacedText to edit
if (string.IsNullOrEmpty(_xPath) && string.IsNullOrEmpty(_replacedText))
throw new NullReferenceException("XPath or ReplacedText must be specified.");
List<string> files = getFiles();
if (files == null)
throw new NullReferenceException(string.Format("Could not find a part of the path '{0}\\{1}'", _folder, _document));
foreach (string file in files)
{
editXml(file);
}
success = true;
}
catch (Exception ex)
{
// System.Diagnostics.Debugger.Launch();
Log.LogErrorFromException(ex);
success = _continueOnError;
}
return (success || _continueOnError);
}
#endregion
#region Private Methods
#region editXml
private void editXml(string file)
{
//load file into XmlDocument
Log.LogMessage(MessageImportance.Normal, string.Format("Loading file '{0}'.", file));
XmlDocument doc = new XmlDocument();
doc.Load(file);
//Get namespace manager
XmlNamespaceManager nsmgr = new XmlNamespaceManager(doc.NameTable);
if (!string.IsNullOrEmpty(doc.DocumentElement.NamespaceURI))
{
if (string.IsNullOrEmpty(Prefix))
{
nsmgr.AddNamespace("ns", doc.DocumentElement.NamespaceURI);
//Inject the namespace into the xpath query
XPath = XPath.Replace("/", "/ns:").Replace("/ns:/ns:", "//ns:");
}
else
{
nsmgr.AddNamespace(Prefix.EndsWith(":") ? Prefix.Replace(":", "") : Prefix, doc.DocumentElement.NamespaceURI);
}
}
int i = 0;
foreach (XmlNode node in doc.SelectNodes(XPath, nsmgr))
{
Log.LogMessage(MessageImportance.Normal, string.Format(" Found node '{0}'.", XPath));
if (string.IsNullOrEmpty(Attribute))
{
//Value setting
if (string.IsNullOrEmpty(ReplacedText))
{
Log.LogMessage(MessageImportance.Normal, string.Format("\t Setting Value to '{0}'.", Value));
if (node.Value == null && string.IsNullOrEmpty(node.InnerText))
node.Attributes["value"].Value = Value;
else
node.InnerText = Value;
}
else
{
//Value replacement
string oldvalue;
if (node.Value == null && string.IsNullOrEmpty(node.InnerText))
{
oldvalue = node.Attributes["value"].Value;
node.Attributes["value"].Value = node.Attributes["value"].Value.Replace(ReplacedText, Value);
Log.LogMessage(MessageImportance.Normal, string.Format("\t Replaced '{0}' with '{1}'.", oldvalue, node.Attributes["value"].Value));
}
else
{
oldvalue = node.InnerText;
node.InnerText = node.InnerText.Replace(ReplacedText, Value);
Log.LogMessage(MessageImportance.Normal, string.Format("\t Replaced '{0}' with '{1}'.", oldvalue, node.InnerText));
}
}
}
else
{
if (string.IsNullOrEmpty(ReplacedText))
{
//Attribute setting
Log.LogMessage(MessageImportance.Normal, string.Format("\t Setting Attribute '{0}' to '{1}'.", Attribute, Value));
node.Attributes[Attribute].Value = Value;
}
else
{
//Attribute replacement
string oldvalue = node.Attributes[Attribute].Value;
node.Attributes[Attribute].Value = node.Attributes[Attribute].Value.Replace(ReplacedText, Value);
Log.LogMessage(MessageImportance.Normal, string.Format("\t Replaced value of Attribute '{0}' from '{1}' to '{2}'.", Attribute, oldvalue, node.Attributes[Attribute].Value));
}
}
i++;
//end of foreach
}
if (i == 0)
Log.LogWarning("Unable to locate node '{0}'.", XPath);
Log.LogMessage(MessageImportance.Normal, string.Format("Document completed with {0} change(s).", i));
doc.Save(file);
}
#endregion
#region getFiles
private List<string> getFiles()
{
if (Document.Contains("\\") && !string.IsNullOrEmpty(Folder))
throw new NotSupportedException("Document cannot have path information when Folder is specified.");
if (!string.IsNullOrEmpty(Folder))
return getFiles(Folder, Document);
else
return getFiles(Document);
}
private List<string> getFiles(string Document)
{
// Split folder from filename
return getFiles(Document.Substring(0, Document.LastIndexOf("\\")),
Document.Substring(Document.LastIndexOf("\\") + 1));
}
private List<string> getFiles(string Folder, string Document)
{
List<string> files = new List<string>();
// Add item for each file matching the search criteria
foreach (string file in Directory.GetFiles(Folder, Document))
files.Add(file);
//Check sub directories for additional files.
if (Recursive)
{
//Call getFiles with each subdirectory and the Document.
foreach (string directory in Directory.GetDirectories(Folder))
files.AddRange(getFiles(directory, Document));
}
return files;
}
#endregion
#endregion
}
}




Now, how do you use these roughly 300 lines of code? As you can see in the XML documentation prior to the class declaration, there are the following examples:


<EditXml Document="C:\VSProjects\MyProject\*.config" XPath="//configuration/appSettings/add[@key='SMTPPort']" Value="26" />
<EditXml Document="*.config" Folder="C:\VSProjects\MyProject\" XPath="//configuration/appSettings/add[@key='SMTPPort']" Value="26" />
<EditXml Document="*.config" Folder="C:\VSProjects\MyProject\" XPath="//p:configuration/p:appSettings/p:add[@key='SMTPPort']" Value="26" Prefix="p" />
<EditXml Document="*.config" Folder="C:\VSProjects\MyProject\" XPath="//p:configuration/p:appSettings/p:add[@key='SMTPPort']" Value="26" Prefix="p" Attribute="value" />

If you have any suggestions, let me know.

2008/03/27

Getting info back out of a custom MSBuild Task



So I am working on getting some build automation going. Unfortunately, my client is running Visual SourceSafe (how I miss Team Foundation Server). So I am tasked with getting the latest code out of VSS and building numerous configurations of our solutions, ensuring that the *.config files are set to reflect the environment required for each configuration being built. After I get the code, I want to label it, so if we need to go back and rebuild version x.y.z we can grab its label.




In TFS, I would just let the vanilla installation and TeamBuild.targets deal with coming up with a label number. But again, I gots no TFS. So I am going to generate a label based on the date and use the MSBuild Community Tasks (available at http://msbuildtasks.tigris.org/) to handle the communication with VSS.




So how do you get the date formatted the way you want? Well, there is the easy System.DateTime.ToString(string format), right? All you have to do to get to this functionality is to write an MSBuild custom task.




Enough talk, on to the code. You start by creating a new class library project and adding references to Microsoft.Build.Framework and Microsoft.Build.Utilities.





Now you need to start building your class. Something like this should do the trick:



using System;
using Microsoft.Build.Framework;
using Microsoft.Build.Utilities;
namespace MSBuilding
{
public class GetDateTime : Task
{
DateTime _now;
string _format = string.Empty;
/// <summary>
/// The format string to be passed to DateTime.ToString method
/// </summary>
[Required]
public string Format
{
set { this._format = value; }
}
/// <summary>
/// This is the TaskParameter that is returned.
/// </summary>
[Output]
public string ReturnValue
{
get { return _now.ToString(_format); }
}
/// <summary>
/// Required to implement the Microsoft.Build.Utilities abstract class.
/// </summary>
public override bool Execute()
{
bool result = true;
try
{
_now = DateTime.Now;
}
catch (Exception ex)
{
//something failed set the result to false and log the Exception
result = false;
bool showStackTrace = true;
Log.LogErrorFromException(ex, showStackTrace);
}
return result;
}
}
}


Here's what's happening. I'm declaring the class GetDateTime and having it inherit from Microsoft.Build.Utilities.Task. Then I'm setting up some class-level variables for _format and _now, along with a required property, Format, which ensures that a format string is passed in from the MSBuild project. I'm also putting minimal logic into the Execute method, since this class really does very little.


To get info back out of the task, you need some properties with the [Output] attribute tagged onto them. In the case of this class, we have one property, ReturnValue.


Now all we have to do is write up a simple proj file and invoke the task we wrote. A simple, single target project is good enough for testing:

<?xml version="1.0" encoding="utf-8" ?>
<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003" DefaultTargets="Test">
<UsingTask AssemblyFile="MSBuilding.dll" TaskName="MSBuilding.GetDateTime"/>
<Target Name="Test">
<GetDateTime
Format="yyyyMMdd_HHmmss_ff">
<Output TaskParameter="ReturnValue" PropertyName="MyTime"/>
</GetDateTime>
<Message Text="$(MyTime)"/>
</Target>
</Project>
Now we just call MSBuild.exe passing our proj file as the only argument.


That's about it. Look for more MSBuild goodness as I get time.

2008/03/25

ADO.Net Data Services: Astoria

Steve Horn has some great information to answer the burning questions about Astoria.

A brief overview of what is to come

In my "spare time" I'll be posting any info that has been helpful in blasting away roadblocks in my development projects, on topics including WinForms development, ASP.Net, Team Foundation Server, and SQL Server.

I make no promises or guarantees on the content or frequency, so use at your own risk.
