Browsing posts in: Church IT

Job Opening in OKC

Crossings Community Church in Oklahoma City is expanding its MIS team. Michael Foster would be a great guy to work with, and from the Spring CITRT, Crossings appears to be a thriving place to serve.

Network Administrator

Scope of Position: Assist in supporting the stable operation of the in-house computers, servers and IP network infrastructure as well as other information systems applicable to Crossings Community Church.

Organizational Relationships: While this position will interact with most, if not all, of the departments of the church, the Network Administrator shall report to the Director of Management Information Systems (MIS).

Responsibilities: The Network Administrator’s role is to enable the ministry of the church with information technology hardware and software. This includes planning, developing, installing, configuring, maintaining, supporting, and optimizing all network hardware, software, and communication links. The person will also analyze and resolve end user hardware and software computer problems in a timely and accurate fashion, and provide end user training where required. This person will also work closely with the MIS Director to optimize all areas of the church’s information systems.

Skills Required for Position: Working knowledge of a Microsoft Windows domain and networking. Proficiency with Microsoft Office, desktop operating systems, and a strong willingness and ability to learn.

Education and Experience: The Network Administrator should hold a vo-tech or college diploma, at a minimum, and should have a minimum of 1 to 2 years' experience working with computers and various information systems components. A+ Certification or equivalent knowledge is required, and any Microsoft or Cisco certifications will be beneficial.

If you are interested, email a resume to Michael Foster (mfoster at crossingsokc dot org)

Exchange Storage Limits?

So do you use storage limits on your Exchange Server? If so, what are they?

We have imposed fairly conservative limits on our users, not because of space but because of DR planning…. We got burned two years ago during an Exchange Server crash when there were no limits on storage… so the reaction was to impose limits to keep the nightmare some of the mailboxes caused from happening again… Today I am wondering if those limits are “fair”.

So, just doing some research: what are your limits, and why?
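For context, on Exchange 2007 these limits can be set per mailbox (or per database) from the Exchange Management Shell; on Exchange 2003 the equivalent settings live on the store's Limits tab in System Manager. The quota values below are only placeholder examples, not a recommendation:

```powershell
# Exchange 2007 only -- the values here are examples, not advice.
# Per-mailbox limits, overriding the database defaults:
Set-Mailbox -Identity "jdoe" `
  -UseDatabaseQuotaDefaults $false `
  -IssueWarningQuota 400MB `
  -ProhibitSendQuota 450MB `
  -ProhibitSendReceiveQuota 500MB

# Or set the defaults for everyone on a mailbox database:
Set-MailboxDatabase -Identity "Mailbox Database" `
  -IssueWarningQuota 400MB `
  -ProhibitSendQuota 450MB `
  -ProhibitSendReceiveQuota 500MB
```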

Rolling out a Monster Rig

As part of our OSX and AD Integration we are rolling out a new video production rig to our media dept.  This project has definitely been more than just purchasing hardware and dropping it on a desk and walking away, but learning how to serve our Mac users in the process (allowing those users to start to become equal citizens on the network) has been a good adventure.


This new Mac is a quad-core, and so far the beast works really well.
The Hardware: Quad Core 3 GHz Mac Pro with 10 GB of memory, dual 24-inch Dell displays, dual SuperDrives, a 250 GB primary volume, a 1 TB scratch disk, a 250 GB Time Machine disk, and a 12×6 Intuos tablet.
The Software: Final Cut Pro and Adobe Production Premium. Beware: FCP takes about 6 hours to fully install. (We went with Adobe’s Production Premium even though we don’t use Adobe Premiere, since it was MUCH cheaper than purchasing Design Premium and then adding After Effects… Thanks to Nancy at CCB for that little nugget.)

Dave, our Media Director, has been VERY eagerly anticipating the arrival of his new ‘baby’, and we delivered the hardware to his NEW office on Friday… I wish he would show some excitement :) Thanks to our campus services team, Dave has a mondo huge keyboard tray that even holds his tablet and keyboard…


Stop grinning Dave and start cranking out some more cool videos!

Cloning the Macs

Don’t think that I am a proponent of making the Macs on our network multiply, but rather of making those on our network look the same… One of the keys to our AD and Mac integration was reducing the time it takes to deploy a Mac on our network. Earlier this year, when I had to reinstall a MacBook Pro, it took over 6 hours to install the base software and drivers (items that every user gets) in addition to installing the components each specific user needs. Knowing this was taking way too long, I was on a quest to make it less painful.

Enter Apple Server’s System Image Utility… The utility lets you create a base system, prepare it for deployment over the network, and distribute it to similar clients (Intel or PPC). There are two options for creating the image: 1. pull the image from DVD, or 2. clone an existing machine. The benefit of creating the image from DVD is a clean, factory-default installation that deploys fairly quickly over the network… We, however, elected to clone an existing machine. The clone allows us to add all the base software and drivers and then push that image, with the base software already installed, to another machine.

How it’s done:

  • Install from DVD onto a machine with the same vintage of processor as the machines you plan to deploy the image to.
    • In our case we have both PPC and Intel, so we started by making an image of each on two separate machines.
  • After the OS X installer is complete, update the OS from Apple Software Update, add the software you want included, and configure the preferences you would like.
    • In our case our base install includes: Mac Office 2008, Canon PS and UFR II drivers, disabling the onboard Bluetooth, disabling .DS_Store files on network volumes, and disabling Guest account access. (DO NOT INSTALL SYMANTEC BEFORE YOU IMAGE THE MACHINE… for some reason this causes the image to fail.)
  • After the base software and drivers are configured, go to Disk Utility and run permissions repair.
  • The image can only be captured from a volume other than the one you are currently booted from.
    • If the install is on the primary volume, you will have to boot the device in target disk mode via the Startup Disk System Preferences.
    • If it is installed on a secondary volume, you can boot to the primary volume to capture the image.
  • On a machine with the OS X 10.5.3 Server Admin Tools installed (downloaded from http://www.apple.com/support/downloads/serveradmintools1053.html or off the OS X 10.5.3 Server disc), start the System Image Utility; it should find the volume you just created and updated.
    • Select the volume you want to image, choose NetInstall, and select Customize.
    • Add Enable Automated Installation and Create Image to your workflow, then configure where you want to save the output files, select an index for each image you will create, and choose Run.
  • After an hour or so the location you selected will have a folder/file ending in .nbi
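A few of the base-install tweaks above can be scripted on the machine being prepped rather than clicked through by hand. This is a sketch assuming the usual 10.5-era plist keys (the volume name is an example); it is guarded so it only does anything on a Mac:

```shell
# Run on the Mac being prepped, before capturing the image.
if [ "$(uname)" = "Darwin" ]; then
  # Stop Finder from writing .DS_Store files to network volumes
  defaults write com.apple.desktopservices DSDontWriteNetworkStores true

  # Power down the onboard Bluetooth controller (system-wide plist)
  sudo defaults write /Library/Preferences/com.apple.Bluetooth ControllerPowerState -int 0

  # Repair permissions on the volume you are about to capture
  sudo diskutil repairPermissions "/Volumes/Macintosh HD"
fi
```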

How to Deploy the image: (Our configuration)

  • Enable the NetBoot service in the Server Admin console
  • Next, configure the NetBoot service in its settings
    • On the General tab, enable the interface you want NetBoot to serve on (Ethernet)
    • On the General tab, select where you want to store the images (Volume 2 for both Images and Client Data)
  • Copy the .nbi to the location where you told the NetBoot service to save the data: /NetbootServiceLocation/Library/NetBoot/NetBootSP0
  • Next, configure the images under Server Admin > NetBoot > Settings > Images tab
    • Enable the image you would like to NetInstall from.
    • Select the architecture that should use this volume.
    • Restart the NetBoot service
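On the server side, the copy-and-restart portion of the steps above can also be done from the terminal with serveradmin. The paths and image name here are examples from our configuration; the block is guarded so it only does anything on a Mac:

```shell
# Run on the OS X Server box hosting the NetBoot service.
if [ "$(uname)" = "Darwin" ]; then
  # Drop the image where the NetBoot service looks for it
  sudo cp -R "MyBaseImage.nbi" "/Volumes/Volume2/Library/NetBoot/NetBootSP0/"

  # Cycle the service so it picks up the new image, then check it
  sudo serveradmin stop netboot
  sudo serveradmin start netboot
  sudo serveradmin fullstatus netboot
fi
```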


How to boot the Machine and install the image:

  • While booting the device, hold the ‘n’ key, or select the network volume in System Preferences > Startup Disk
  • When the device boots, a small globe will display and then the machine will indicate that it is recovering a system image.


After the image is restored, the machine will rename itself and add a digit to the end, so you can install it on as many machines as you like at the same time without the duplicate-name issues you would hit on Windows machines that haven’t been through Sysprep. Simply rename the machine in System Preferences > Sharing by changing the name and the local hostname.
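The rename can also be scripted with scutil instead of clicking through the Sharing preferences. A minimal sketch (the machine name is an example; LocalHostName allows only letters, digits, and hyphens, so we derive it from the friendly name):

```shell
# Derive a LocalHostName-safe name from the friendly computer name:
# spaces become hyphens, everything else non-alphanumeric is dropped.
sanitize() {
  printf '%s' "$1" | tr ' ' '-' | tr -cd 'A-Za-z0-9-'
}

NEW_NAME="Media Edit 1"              # hypothetical machine name
LOCAL_NAME="$(sanitize "$NEW_NAME")"

# Apply only when actually run on the Mac (scutil is macOS-only):
if [ "$(uname)" = "Darwin" ]; then
  sudo scutil --set ComputerName "$NEW_NAME"
  sudo scutil --set LocalHostName "$LOCAL_NAME"
fi
```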

Kudos to ACS OnDemand

During our recent outage, Dean Lisenby and the team at ACS came through for us again.

I contacted Dean after hours to alert him to our outage and asked how ACS could help us get access to our data quickly in a read-only format. I quickly received a text message that he would contact me after his kids went to bed… (Family first for sure!) I received another text message about 25 minutes later asking which backup we wanted to use, and then another text message 10 minutes later saying to check my email.

The email gave me 5 user names and passwords to access our data via the ACS OnDemand system.  Dean had recovered our latest backup and we were able to access the data.

We were able to communicate to our staff which users had access and how to use the system by the next morning.

We elected to just use this installation as a reference, since our plan was to bring the production database and server back online as soon as the hardware was repaired.

Even though it might have appeared minor to bring us online for just a day with data from a backup we weren’t going to keep, ACS made this a priority.

Thanks ACS!  And Thanks to Stacy Kennedy; Mark Williams; Darlene Winborne; Sylvia Watts and Dean Lisenby!


Knowing now that this option is rock solid and so quick to implement, I am adding it to our DR planning for the future. My only complaint is that deploying this solution to all our staff would be fairly cumbersome… but hopefully the next time anything like this happens we will have AccessACS out of testing and into production.

Continuing the Mac theme

So today the launch date of July 11th was announced for the iPhone 2.0… And everything I have been hearing says it is fully compatible with Exchange 2003 and Exchange 2007…. Sooooo, I am posting here what we communicated to all our users to make sure there is no confusion… :) The following was communicated to our users today:

Last fall we communicated which mobile devices would work with our network to synchronize your calendar and email over the air. A new device will soon be added to that list: the Apple iPhone 3G, being released on July 11th. The new version of the iPhone includes support for Microsoft® Exchange ActiveSync® and is compatible with our servers.
As before, this requires your cellular phone account to include data services, and our support of mobile devices extends only to configuring your device to connect to our servers. All other data issues must be resolved by you and your carrier.
Since none of us in the Information Technology Office have an iPhone, we will not be able to pre-test and configure this new device. This means there may be some issues that we cannot be aware of and resolve prior to helping you configure an iPhone 3G to connect to our network. If you plan to purchase an iPhone 3G, I strongly encourage you to check with your reseller to confirm the return policy is at least 15-30 days, to allow enough time for us to get the device configured and still leave you enough time to try out the device and decide if it performs as you expect.

Do you use PCO? Follow-up

After my recent post “do you use PCO and ACS?” Michael Foster asked:

  • Michael Foster

    05.29.2008 9:29 am

    We use PCO which I just found out about last week. Where is the documentation for the API? I’d like to see about getting it integrated with Fellowship One. I browsed around the forum and it looks like there are some other people inquiring on this feature.


    To get information from PCO about the API you are supposed to email support@ministrycentered.com

    We are releasing the first part of our API for developers of other programs to begin to integrate with PCO. If you are interested in this functionality please e-mail:

    Here is the info I received, the included link has a good amount of documentation for the API.


    Thanks for your interest in the API! We have been working hard on getting our API ready and we are excited to see what you are going to do with it.

    Right now the API is considered to be in beta and we are working on rolling out the API for each section of Planning Center separately. The first part of the API that is available immediately is the People API, then in the next couple of weeks we will be releasing the Songs API. Once we are sure those are meeting the needs of our customers we will be releasing a Plans API.

    You can get access to the API documentation at:


    If you have any questions or comments please feel free to contact us.


    Jeff Berg
    Ministry Centered Technologies

Do you use PCO and ACS?


I am looking to connect with others who use both Planning Center Online and ACS. This last week Planning Center Online released an API to allow potential pushing and pulling of contact information out of their data.

Since we started using PCO, we have been putting pressure on Jeff Berg and his team to let us synchronize the data sets between ACS and PCO. Our creative arts team has raised the flag that having ‘people/contact’ information in two sources is becoming very, very burdensome.

This makes me wonder what interest exists in a co-op to develop a sync tool for those who are using PCO and ACS.

So the PCO API exists…. This is just another case of wishing for more out of the tools we have and urging those at ACS to help us develop the best tools possible. ACS, how can we make this a reality?
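For anyone wanting to poke at a sync tool, pulling data from a REST-style API like this usually comes down to one authenticated HTTP request. This is a hypothetical sketch: the endpoint URL and XML format are assumptions, not taken from PCO's documentation (the real URLs are in the documentation link they email you):

```shell
# Assumed endpoint -- replace with the URL from PCO's API docs.
PCO_URL="https://www.planningcenteronline.com/people.xml"

# Pull the people list with HTTP Basic auth; only runs if credentials
# are provided in the environment.
if [ -n "$PCO_USER" ] && [ -n "$PCO_PASS" ]; then
  curl -s -u "$PCO_USER:$PCO_PASS" "$PCO_URL" -o people.xml
fi
```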

48 Hour Work Night? Follow-up

After my recent post about our 48 hour work night, Matthew Irvine asked about the ‘Plan B’ I mentioned. I have to admit we didn’t have a preformulated ‘Plan B’. When we realized that the recovery of the backup to a new virtual disk wasn’t going to work, we put things back as they were and stopped the SAN expansion so we could go home and sleep on our problem. Jeremie remounted the iSCSI volume and we left the file server in its original state.

Now for the later-contrived ‘Plan B’: we used DFSR (Distributed File System Replication) to replicate the entire file server’s iSCSI drive (drive E:) to another new, clean, and healthy file server.

Note that the R2 variety of DFS is much, much better in what it can do and how it is configured. DFSR allows for file replication without creating a namespace first, and the replication is much better than the previous version (from what I have read and what Chris Green tells me).

Installing DFSR requires R2 to be installed, and in our case the file server required us to install DFSR from the W2K3 R2 disc 2. Even after the install, our file server didn’t display the management console, but lucky for us you don’t have to have the management console functional on both servers to replicate the data. After we installed DFSR on the new server, we were able to set up a job to replicate between the two servers.
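For reference, a replication job like ours can also be created from the command line with dfsradmin, which ships with the R2 tools. This is a sketch from memory: the group, folder, server names, and paths below are examples, and the exact flags should be verified against dfsradmin's built-in help before running anything.

```bat
REM Create a replication group with both file servers as members
dfsradmin rg new /rgname:"FileServerMove"
dfsradmin mem new /rgname:"FileServerMove" /memname:OLDFS
dfsradmin mem new /rgname:"FileServerMove" /memname:NEWFS

REM Define the replicated folder and a two-way connection
dfsradmin rf new /rgname:"FileServerMove" /rfname:"EDrive"
dfsradmin conn new /rgname:"FileServerMove" /sendmem:OLDFS /recvmem:NEWFS /connenabled:true
dfsradmin conn new /rgname:"FileServerMove" /sendmem:NEWFS /recvmem:OLDFS /connenabled:true

REM Point each member at its local path; the old server seeds the data
dfsradmin membership set /rgname:"FileServerMove" /rfname:"EDrive" /memname:OLDFS /localpath:E:\ /isprimary:true /membershipenabled:true
dfsradmin membership set /rgname:"FileServerMove" /rfname:"EDrive" /memname:NEWFS /localpath:E:\ /membershipenabled:true
```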

So our ‘Plan B’ is currently to use the newly replicated file server as our primary server while we work through the expansion of the SAN. One great feature of DFSR is that it replicates not only the files but also the permissions (albeit the permissions we inherited from years of previous use; we will be cleaning up that mess in the coming weeks). Our plan is to remove the replication job, power down and decommission the old file server, and give the new file server the same name. We decided to keep the same server name because our laptop users use offline synchronization, and keeping the name is easier than reconfiguring the offline sync settings on each laptop.

After our SAN is in its newly grown state, we can replicate the files back to the SAN from the virtual disk using the same process, all within the same virtual server….

One interesting feature we must research a bit more is using a namespace for the file server… Maybe DFSR is a potential topic for Chris Green to ‘present’ during a future CITRT podcast.

That’s the plan, and maybe we’ll stick to it.