## Create a COM+ Application with PowerShell

```powershell
$comAdmin = New-Object -ComObject COMAdmin.COMAdminCatalog
$apps = $comAdmin.GetCollection("Applications")
$apps.Populate()

$newComPackageName = "MyFirstCOMPackage"
$appExistCheckApp = $apps | Where-Object { $_.Name -eq $newComPackageName }

if ($appExistCheckApp)
{
    $appExistCheckAppName = $appExistCheckApp.Value("Name")
    "This COM+ Application already exists : $appExistCheckAppName"
}
else
{
    $newApp1 = $apps.Add()
    $newApp1.Value("Name") = $newComPackageName
    # Security Tab, Authorization Panel, "Enforce access checks for this application"
    $newApp1.Value("ApplicationAccessChecksEnabled") = 0

    # See http://msdn.microsoft.com/en-us/library/ms686107(v=VS.85).aspx#identity for full documentation.

    # Optional (to run the application under a specific Identity)
    $newApp1.Value("Identity") = "MyDomain\myUserName"
    $newApp1.Value("Password") = "myPassword"

    $saveChangesResult = $apps.SaveChanges()
    "Results of the SaveChanges operation : $saveChangesResult"
}
```

Full documentation of the properties is here: http://msdn.microsoft.com/en-us/library/ms686107(v=VS.85).aspx#applicationaccesschecksenabled

I'm a C# developer, but PowerShell rocks the suburbs for a lot of tasks!

## Bug in Documentation : Microsoft Access Database Engine 2010 Redistributable

http://www.microsoft.com/downloads/en/details.aspx?familyid=C06B8369-60DD-4B64-A44B-84B371EDE16D&displaylang=en

There is a bug in the documentation at the download page. The documentation says:

1. If you are the user of an application, consult your application documentation for details on how to use the appropriate driver.
2. If you are an application developer using OLEDB, set the Provider argument of the ConnectionString property to "Microsoft.ACE.OLEDB.12.0". If you are connecting to Microsoft Office Excel data, add "Excel 14.0" to the Extended Properties of the OLEDB connection string.

The "Excel 14.0" is the issue. It should be "Excel 12.0". Here are a few connection strings to provide full context.

```csharp
// Old school Jet, been around for a while
"Provider=Microsoft.Jet.OLEDB.4.0;Data Source='C:\MyOldSchoolFile.xls';Extended Properties='Excel 8.0;HDR=NO;IMEX=1;';"

// Newer version, with xlsx
"Provider=Microsoft.ACE.OLEDB.12.0;Data Source='C:\MyXlsXFile.xlsx';Extended Properties='Excel 12.0 Xml;HDR=NO;IMEX=1;';"

// Newer version, any xls file
"Provider=Microsoft.ACE.OLEDB.12.0;Data Source='C:\AlmostAnyExcelVersionFileRunningUnder64BitOS.xls';Extended Properties='Excel 12.0;HDR=NO;IMEX=1;';"
```

Don't take the above as absolute truth. But it should draw attention to the issue (if you're experiencing it), and help with some more google (errr... bing) searches.
Here are some other phrases which might lead you here:

"The OLE DB provider "Microsoft.Jet.OLEDB.4.0" has not been registered" (Trying to read an Excel file on a 64-bit O/S? The last connection string above should work for you under a 64-bit O/S.)

"Could not find installable ISAM" (This might show up because of the "14.0" vs "12.0" bug mentioned above.)

http://social.msdn.microsoft.com/Forums/en/adodotnetdataproviders/thread/686d8ebb-0da3-4f0c-bf16-9c650f8dcb32
http://www.connectionstrings.com/excel-2007

## RANT : Hard Coded Security Roles

:::::::::::sigh:::::::::::::

If I come across one more hard coded security roles brownfield application, I think I'm gonna throw my chair out the window.

Today, I came across an application that does type-checking to determine security. The code goes something like this:

```csharp
AbstractUser user = SomeGetUserMethod("myname", "mypassword");
/* the above method returns 1 of a few concrete classes which implement AbstractUser */

// type-checking time!!
if (user.GetType() == typeof(BAL.Domain.AdminUser))
{
    MyWebPage.btnDeleteAllEmployees.Visible = true;
    MyWebPage.btnUpdateMyOwnProfile.Visible = true;
    MyWebPage.btnLogOut.Visible = true;
}
if (user.GetType() == typeof(BAL.Domain.NormalUser))
{
    MyWebPage.btnDeleteAllEmployees.Visible = false;
    MyWebPage.btnUpdateMyOwnProfile.Visible = true;
    MyWebPage.btnLogOut.Visible = true;
}
```

Or maybe you have seen this (bools representing 1 of a few hard coded roles):

```csharp
bool isAdmin = SomeMethodToFigureOutHardCodedRoles();
bool isNormalUser = SomeMethodToFigureOutHardCodedRoles();

if (isAdmin)
{
    MyWebPage.btnDeleteAllEmployees.Visible = true;
    MyWebPage.btnUpdateMyOwnProfile.Visible = true;
    MyWebPage.btnLogOut.Visible = true;
}
if (isNormalUser)
{
    MyWebPage.btnDeleteAllEmployees.Visible = false; // NormalUser cannot do this! So hide the button.
    MyWebPage.btnUpdateMyOwnProfile.Visible = true;
    MyWebPage.btnLogOut.Visible = true;
}
```

You know the drill.
And the pre-project-starts-to-be-constructed discussion goes something like this: "Today, we have 3 roles, let's base all our security off those 3 (hard coded) roles. Those roles will ~~never~~ change."

And my thoughts on a finite set of hard coded Role(s): That's fine for your kid's soccer club fan page. That is NOT fine for a professional DotNet developer creating a business application. I've written an example (above) of how NOT to do it, so you can see the pattern clearly.

The problem will always end up being that the 3 (N) number of roles will never be sufficient. A DAY WILL ARRIVE WHEN THE BUSINESS OWNERS WANT A SLIGHTLY DIFFERENT ROLE. They'll say they want to "tweak" an existing role, but what they really mean is that they want a new role that is very close to an existing role. But "a tad bit different" is still different. And you (or your "architect", if you want to blame someone else) did not account for this at the beginning of the project.

You can check out http://www.lhotka.net/weblog/CommentView,guid,9efcafc7-68a2-4f8f-bc64-66174453adfd.aspx for a discussion.

I can't blame just the developer(s). Microsoft and its "super easy p-easy" .IsInRole() method helped propagate this ugliness. And then this kind of code:

```csharp
[PrincipalPermission(SecurityAction.Demand, Role = "Teller")]
```

And I completely agree with the article above and its assertion: "At runtime, when the user is actually using the application, the roles are entirely meaningless."

Listen people (aka all you developers)... software cares about permissions (or "rights"). Stop coding to roles; start coding to permissions (or rights).

The article above tackles the issue using the existing (available) objects in DotNet. Here is my custom IPrincipal solution, if the above workaround rubs you the wrong way. While I have "Role" methods, I never use them***, except for the AllRoles collection, which I use to show humans what role they are in.
However, I never show just the Roles; I show the Permissions/Rights, because that is more important.

```csharp
public interface IRolesAndRightsPrincipal : System.Security.Principal.IPrincipal
{
    bool IsInRole(System.Guid role);
    bool IsInAnyRole(System.Guid[] roles);
    bool IsInAllRoles(System.Guid[] roles);

    bool HasRight(System.Guid right);
    bool HasAnyRight(System.Guid[] rights);
    bool HasAllRights(System.Guid[] rights);

    ISecurityRoleCollection AllRoles { get; }
    ISecurityRightCollection AllRights { get; }
}
```

(*** The one place I might use them (though I never have) is backwards compatibility when refactoring an existing application. If the current application is "all roled up", then that would be a stepping stone to getting to permissions/rights based security.)

You might be saying, "What is the HasAnyRight method all about?" Well, take for instance that you have a menu link called "Manage Employees". This link takes you to a separate page that allows you to ADDNEW, UPDATE, and/or DELETE an employee. (These are 3 distinct permissions/rights.) So how do you decide if you should show this "Manage Employees" menu link, since it's not based on a single permission/right? And there ya go: use the HasAnyRight() method.

```csharp
menuItemManageEmployees.Visible = customPrinc.HasAnyRight( /* throw the Guids here which represent ADDNEW, UPDATE, DELETE */ );
```

You show the link if the user has one (any) of the 3 permissions/rights. Then when you get to that new page, you show buttons/links/etc. based on the individual permissions/rights.

Side note: my concrete IRolesAndRightsPrincipal (aptly named "RolesAndRightsPrincipal") takes all your roles/rights in its constructor, and then becomes a look-up holder from that point on. Obviously, if you do on-the-fly permissions/rights changes, you'll have to refresh it. Currently, I just go with "You gotta re-login to get your fresh permissions/rights", since the project (almost) never takes away any permissions/rights and seldom changes them.
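Since the interface above is C#, here is a language-neutral sketch of the same look-up-holder idea in Python. The right GUIDs and the `RightsPrincipal` name are made up for illustration; the point is that checks are against individual rights, never a hard coded role:

```python
import uuid

# Hypothetical GUIDs for three distinct rights (illustrative only).
ADD_NEW_EMPLOYEE = uuid.uuid4()
UPDATE_EMPLOYEE = uuid.uuid4()
DELETE_EMPLOYEE = uuid.uuid4()

class RightsPrincipal:
    """Look-up holder for a user's rights, loaded once (e.g. at login)."""
    def __init__(self, rights):
        self._rights = frozenset(rights)

    def has_right(self, right):
        return right in self._rights

    def has_any_right(self, rights):
        return any(r in self._rights for r in rights)

    def has_all_rights(self, rights):
        return all(r in self._rights for r in rights)

# A user who can only update employees:
principal = RightsPrincipal([UPDATE_EMPLOYEE])

# Show the "Manage Employees" link if the user holds ANY of the three rights.
show_manage_link = principal.has_any_right(
    [ADD_NEW_EMPLOYEE, UPDATE_EMPLOYEE, DELETE_EMPLOYEE])
print(show_manage_link)                      # True
print(principal.has_right(DELETE_EMPLOYEE))  # False
```

Note the principal is just a frozen set of rights after construction, which is what makes it a cheap look-up holder (and why it needs a refresh, or a re-login, when rights change).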
But you'll have to conquer that design decision on your own.

:::::::::Call to all Developers::::::::::::

If you're developing a new project, and starting out with N number of hard coded roles, please stop. Please stop. I'm begging you, please stop.

If you're in a brownfield application, then ask your manager for some time to re-factor the security. If your manager is fair, then he/she will find some time for you that does not involve late nights or weekends. If your manager thinks re-factoring (without scope change) is a waste of time since "it works, so it's OK", then find a new job.

This oldie-but-goodie article has the basic meat of a good solution: http://www.codeproject.com/KB/security/cgsecurity.aspx
I would take his DDL, update it to your standards, and "framework up" his "Managers" (.cs code).

It is OK to make a Role, but ONLY in the sense of logically grouping a set of permissions/rights. You should be checking permission(s)/right(s) when you are actually interested in the question "Can this IIdentity perform this certain thing?"

Please stop coding hard-coded-roles into your application(s). The developers who have to maintain your code after you've left will thank you.

//End Rant

2016 Update

You will now want to code to "Claims". You'll create (at least) one System.Security.Claims.ClaimsIdentity. You will add 1 or more System.Security.Claims.Claim's to this Identity. Then you will inject one (or more) ClaimsIdentity's into the ClaimsPrincipal. The (final) ClaimsPrincipal will consolidate all of the Claims into one master collection.

## CruiseControl.NET / header.xsl / and DiskSpace

I had a nice situation today. After a code check-in to SVN, CruiseControl.NET reported an error. The culprit?? No more disk space.

The error is seen below. "Out of space" isn't mentioned, which is why I am posting this. But it makes sense: the xsl transformation didn't have enough disk space to handle the transform output.
```
There was an exception trying to carry out your request.

Exception Message:
Unable to execute transform: C:\Program Files (x86)\CruiseControl.NET\webdashboard\xsl\header.xsl
(or)
Unable to execute transform: C:\Program Files\CruiseControl.NET\webdashboard\xsl\header.xsl

Exception Full Details:
ThoughtWorks.CruiseControl.Core.CruiseControlException: Unable to execute transform:
C:\Program Files (x86)\CruiseControl.NET\webdashboard\xsl\header.xsl --->
System.Xml.XmlException: Unexpected end of file while parsing CDATA has occurred. Line 384, position 106.
   at System.Xml.XmlTextReaderImpl.Throw(Exception e)
   at System.Xml.XmlTextReaderImpl.ParseCDataOrComment(XmlNodeType type, Int32& outStartPos, Int32& outEndPos)
   at System.Xml.XmlTextReaderImpl.ParseCDataOrComment(XmlNodeType type)
   at System.Xml.XmlTextReaderImpl.ParseElementContent()
   at System.Xml.XPath.XPathDocument.LoadFromReader(XmlReader reader, XmlSpace space)
   at System.Xml.XPath.XPathDocument..ctor(TextReader textReader)
   at ThoughtWorks.CruiseControl.Core.Util.XslTransformer.Transform(String input, String xslFilename, Hashtable xsltArgs)
   --- End of inner exception stack trace ---
   at ThoughtWorks.CruiseControl.Core.Util.XslTransformer.Transform(String input, String xslFilename, Hashtable xsltArgs)
   at ThoughtWorks.CruiseControl.Core.Util.HtmlAwareMultiTransformer.Transform(String input, String[] transformerFileNames, Hashtable xsltArgs)
   at ThoughtWorks.CruiseControl.WebDashboard.Dashboard.PathMappingMultiTransformer.Transform(String input, String[] transformerFileNames, Hashtable xsltArgs)
   at ThoughtWorks.CruiseControl.WebDashboard.Dashboard.BuildRequestTransformer.Transform(IBuildSpecifier buildSpecifier, String[] transformerFileNames, Hashtable xsltArgs)
   at ThoughtWorks.CruiseControl.WebDashboard.Dashboard.Actions.MultipleXslReportBuildAction.Execute(ICruiseRequest cruiseRequest)
   at ThoughtWorks.CruiseControl.WebDashboard.MVC.Cruise.ServerCheckingProxyAction.Execute(ICruiseRequest cruiseRequest)
   at ThoughtWorks.CruiseControl.WebDashboard.MVC.Cruise.BuildCheckingProxyAction.Execute(ICruiseRequest cruiseRequest)
   at ThoughtWorks.CruiseControl.WebDashboard.MVC.Cruise.ProjectCheckingProxyAction.Execute(ICruiseRequest cruiseRequest)
   at ThoughtWorks.CruiseControl.WebDashboard.MVC.Cruise.CruiseActionProxyAction.Execute(IRequest request)
   at ThoughtWorks.CruiseControl.WebDashboard.MVC.Cruise.CachingActionProxy.Execute(IRequest request)
   at ThoughtWorks.CruiseControl.WebDashboard.MVC.Cruise.ExceptionCatchingActionProxy.Execute(IRequest request)
```

## Rule of 5 : Better machines for Developers

I have just invented the "Rule of 5" when it comes to justifying a computer upgrade for a developer.

• 5 compiles an hour (when you're actually developing code (like you love to do) and not in some meeting).
• Each compile saves 55 seconds**.
• 5 hours of development on average a day. (Hopefully more, but bear with me, I'm trying to use a "5" here.)
• 5 days a week.

That is ~115 minutes/week of savings. ~2 hours a week of savings. Throw into the equation a developer hourly cost of some kind of middle-ground amount of $50 (and another "5" of course!).

That's ~$95.50 per week of savings per developer. Each new $1000 machine pays for itself in ~10.5 weeks.
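The arithmetic above can be sanity-checked with a quick sketch (the 55-second and $50/hour figures are the post's own estimates):

```python
# Rule of 5: estimated weekly savings from a faster build machine.
compiles_per_hour = 5
seconds_saved_per_compile = 55   # measured estimate (see the build tests below)
dev_hours_per_day = 5
days_per_week = 5
hourly_cost = 50                 # USD, the assumed developer cost

seconds_saved_per_week = (compiles_per_hour * dev_hours_per_day *
                          days_per_week * seconds_saved_per_compile)
minutes_saved_per_week = seconds_saved_per_week / 60
dollars_saved_per_week = (seconds_saved_per_week / 3600) * hourly_cost
weeks_to_pay_off = 1000 / dollars_saved_per_week   # $1000 machine

print(round(minutes_saved_per_week, 1))  # ~114.6 minutes (the "~115" above)
print(round(dollars_saved_per_week, 2))  # ~$95.49 per week
print(round(weeks_to_pay_off, 1))        # ~10.5 weeks
```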

Don’t forget about the morale factor, which is priceless.

=====================================

** 55 seconds compile time savings is an estimate of course.
But I actually conducted a real-world test on a solution I am working on right now.
I’d suggest taking a code base you currently have and perform a simple test.
When you build with MSBuild, you only need the code, not the Visual Studio IDE.

Test #1
No Previous Build. (no sln.cache file)
E6300 time : 00:01:10.23  ( 1 minute 10 seconds )
Q6600 time : 00:00:14.21  ( 14 seconds )

Test #2
Subsequent Build. (sln.cache file exists)
E6300 time : 00:00:51.43  ( 51 seconds )
Q6600 time : 00:00:07.55  ( 7 seconds )

Time savings:
No Previous Build. (no sln.cache file)
56 seconds savings.
Subsequent Build. (sln.cache file exists)
44 seconds.

Test machines were an older 1.86 Dual Core E6300 vs a 2.4 Quad Core Q6600.  Keep in mind this is an older Quad Core CPU.
My personal recommendation (based on price point at the time of writing this, 4th quarter 2010) is the i7-950.

CPU Benchmarks:
http://www.cpubenchmark.net/common_cpus.html

Intel Core i7 950 @ 3.07GHz == 6,275
Intel Core2 Quad Q6600 @ 2.40GHz == 2,975
Intel Core2 Duo E6300 @ 1.86GHz == 1,113

Why didn't I test with an i7-950?  That's the point, I don't have one!!

Give your developers more cores!
A.  Because the code can compile faster.
B.  Because most production machines have (at the very least) 4 CPUs/cores.

__CPUs/Cores
Because of advances in Parallel Computing, the minimum core requirement is 2.
However, the number of CPUs should be closer to that of the production servers.
Recommendation: 4 cores, because most modern production servers have at least 4 CPUs.
(The connection here is that with Parallel Computing, developers will be able to actually take advantage of the extra processors/cores.)

In fact, Microsoft has introduced a new namespace (System.Threading.Tasks, the Task Parallel Library) in DotNet Framework 4.0 to allow easier coding against this model.
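The core idea behind that namespace, sketched here in Python rather than the .NET Task Parallel Library (the worker-pool size and work items are illustrative), is simply that independent work items fan out across a pool of workers:

```python
from concurrent.futures import ThreadPoolExecutor

def work_item(n):
    # Stand-in for an independent unit of work (e.g. building one project).
    return n * n

# Fan 8 independent work items out across a pool of 4 workers,
# analogous to MSBuild spinning up one build process per core.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(work_item, range(8)))

print(results)  # [0, 1, 4, 9, 16, 25, 36, 49]
```

For CPU-bound work like compilation, each extra core lets roughly one more of these work items run truly in parallel, which is why the 4-core Q6600 beat the 2-core E6300 so handily in the build tests above.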

__Visual Studio IDE as it relates to the build process (faster builds = more efficient developer)
http://msdn.microsoft.com/en-us/library/bb383805%28v=VS.100%29.aspx
Visual Studio 2008 and 2010 can take advantage of systems that have multiple processors, or multiple-core processors. A separate build process is created for each available processor. For example, if the system has four processors, then four build processes are created. MSBuild can process these builds simultaneously, and therefore overall build time is reduced. However, parallel building introduces some changes in how build processes occur. This topic discusses those changes.

Fourth Quarter 2010 : Machine Recommendation (Leveraging Frugality vs Performance)
• Genuine Windows 7 Professional 64-bit
• Intel(R) Core(TM) i7-950 quad-core (or the i7-930 quad-core, but the newegg price difference is only $10 at the time of writing)
• 6GB DDR3-1333MHz SDRAM
• 1TB RAID 0 (2 x 500GB SATA HDDs)
• 1GB ATI Radeon HD (a version that supports dual monitors)

Why the i7-950?  There is usually a pretty clear-cut "jump" that takes you out of "very, very good and reasonable" into "super great and a lot more expensive".  http://www.cpubenchmark.net/high_end_cpus.html  Just start walking "down" the list until you get to around $300 or under.  The i7-950 (at the time of writing) was the one sitting at the top of the best of the reasonable.  There are processors above the i7-950 that are in the $570 to $1600 range.  Youch!  Of course, by the time I click "publish" that list will be out of date, so you just gotta walk the list until you see the clear-cut "very, very good and reasonable" winner.

Supporting URLS:

http://www.joelonsoftware.com/articles/fog0000000043.html
9. Do you use the best tools money can buy?
Writing code in a compiled language is one of the last things that still can’t be done instantly on a garden variety home computer. If your compilation process takes more than a few seconds, getting the latest and greatest computer is going to save you time.
Debugging GUI code with a single monitor system is painful if not impossible. If you’re writing GUI code, two monitors will make things much easier.

(A personal note about the above comments: people who use Word/Excel don't need the superpower; they'll do fine with something more mainstream.
Code Compile == More Horsepower.)

(The other personal note: Joel on Software makes for some good all-around reading on how to develop software well.)

http://www.codinghorror.com/blog/2006/08/the-programmers-bill-of-rights.html
Every programmer shall have two monitors
Every programmer shall have a fast PC
Every programmer shall have their choice of mouse and keyboard
Every programmer shall have a comfortable chair
Every programmer shall have a fast internet connection
Every programmer shall have quiet working conditions

//Quote from codinghorror article//
It’s unbelievable to me that a company would pay a developer $60-$100k in salary, yet cripple him or her with terrible working conditions and crusty hand-me-down hardware. This makes no business sense whatsoever. And yet I see it all the time. It’s shocking how many companies still don’t provide software developers with the essential things they need to succeed.
//End Quote//

And don’t forget about the dual monitors!

"Survey after survey shows that whether you measure your productivity in facts researched, alien spaceships vaporized, or articles written, adding an extra monitor will give your output a considerable boost 20 percent to 30 percent, according to a survey by Jon Peddie Research."

http://research.microsoft.com/en-us/news/features/vibe.aspx
Microsoft researchers haven’t perfected the genie, but they’ve found a tool that can increase your productivity by 9 to 50 percent and make your work day easier.
The researchers conducted user studies that proved the effectiveness of adding a second or even third monitor to your workstation, creating a wide-screen effect.

Minimum Suggestion: Dual 20" (or 20.5") Monitors.
Suggestion: Dual 22" (23" or 24") monitors.


## SVN (Subversion) case sensitivity issue with Linux (RedHat) SVN Server and Windows Client

I had a weird subversion (svn) issue today.

Here are some clues (for future googlers).

"svn: Can’t open file" ".svn\tmp\text-base" "svn-base" "The system cannot find the file specified".
svn: Can’t open file .svn\tmp\text-base svn-base The system cannot find the file specified

After backtracking, here is what happened.

I had a file (a compiled dll) in source control.
I found out that I had a non-signed version of the dll (DotNet assembly).
I found the "signed version" of the dll.  I wanted to replace the unsigned version with the signed version.

So I replaced the existing file (in my local directory) with the signed version.

There was a slight issue with the file name.

The unsigned version was
MyAssembly.dll

The signed version was slightly off (case-wise):
MyAssembLY.dll

(The file names are obviously for demonstration only).

So when I overwrote MyAssembly.dll with MyAssembLY.dll, it was fine (local windows directory).
The files were "Committed to Subversion".

(FYI, you can ignore the "signed" vs "unsigned" issue I cite above if you're having this issue… the issue was not related to signed/unsigned, but rather the cAsE SenSiTiviTy of the replacement file.)
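The mismatch can be sketched in a couple of lines (file names are the demonstration names from above; this is a rough model, since NTFS does slightly fancier case folding than `lower()`):

```python
unsigned_name = "MyAssembly.dll"
signed_name = "MyAssembLY.dll"

# A case-sensitive file system (the Linux SVN server) sees two different names:
print(unsigned_name == signed_name)                  # False

# Windows/NTFS resolves names case-insensitively, so locally the overwrite
# "worked" and both names pointed at the same file:
print(unsigned_name.lower() == signed_name.lower())  # True
```

That disagreement is exactly why the working copy looked fine on Windows while the server-side checkout blew up.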

Ok…so I thought everything was cool.  WRONG.

The next time my continuous integration ran, I got the error message seen above ("svn: Can't open file").

I mimicked the command line call (to take the CI environment out of the equation) and got the same error message.

The fix?

I deleted all the files in the folder via the repository browser.  (Aka, cleared them OUT of svn).
I uploaded the signed assembly.  (I did this via "Add" and "Commit" with the local TortoiseSVN client.)

Please note this is NOT ideal, because of revision history.  I was lucky in that for this branch, history retention was not a priority.

Now the case of the files on the server (svn) matched the case of my (signed) assemblies (DLLs).

Here is the thread that gave me the clue:
http://issues.apache.org/jira/browse/STDCXX-14

Server:  CollabNet for RedHat SVN Server.  (Aka, a case sensitive environment)
Client:  CollabNet svn client for Windows.