In this blog I am going to talk about cloud computing and explain exactly what it is.

What is the Cloud?

The Cloud, much like Web 2.0, is a term that has been given to many different technologies all grouped together. It’s a term which managers and marketing people like to use, as it is the current buzzword of IT. In reality it is a shift back towards mainframe/centralised computing. The main driving force behind this has been the rise of computer virtualisation. We have got to a point where server hardware is far more powerful than is needed to run the majority of systems, which left servers sitting largely idle. Virtualisation is the process of slicing a physical server up into multiple virtual servers, which can run multiple different operating systems at the same time on the same physical hardware. This means that instead of servers using 15% of their resources on average, you can push this up to 80-90%. In one company we managed to reduce 70+ physical servers down to 6 physical servers using virtualisation. This technology is being heavily used by companies such as Amazon to provide their Web Services, which let customers quickly provision additional servers as required. For example, if a company has a new launch happening which will put far greater demand on their website than their normal servers can cope with, additional servers can be started on Amazon’s infrastructure and the load spread across them all. Once the demand has dropped off, these servers can be removed.
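
As a rough, modern illustration of that kind of on-demand provisioning (a minimal sketch using today’s AWS command-line tools, which post-date the period described here; the image ID, instance type and instance IDs are all placeholders):

# Launch two extra web servers ahead of a big event, then remove them afterwards.
aws ec2 run-instances --image-id ami-12345678 --count 2 --instance-type t3.micro

# Once the demand has dropped off, terminate the extra capacity:
aws ec2 terminate-instances --instance-ids i-0abc1234 i-0def5678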

The next side of cloud computing is providing software as a service. This is where, instead of installing software on your local computer, programs are run through a web browser and delivered over the internet. The best example of this is Google Docs, which gives you a fully functional replacement for Microsoft Office running inside your web browser.

Essentially, cloud computing is the process of utilising other people’s hardware to run your systems, or shifting the management of your software to another company to run for you.

Why is it important?

The best way I have heard the importance of cloud computing described was at an Ubuntu Cloud event. They started out by talking about the early years of electricity. In those early years, anyone who wanted electricity had to have their own generator at home. As time went by, the national grid was set up and electricity was turned into a commodity: rather than producing their own energy, people simply paid for it as a service. This process has been repeated for many other innovations, such as the telephone.

This relates to computing because, until now, the computer industry has been extremely young and still in the innovation phase of its existence. We are finally at a switch-over point where computing is going to evolve into a commodity. There is no longer any need for people to constantly update hardware and software when this can all be maintained at the supplier’s end. All people will need is a simple machine and a monitor; all the hard work will be performed on suppliers’ servers rather than on people’s local computers, and those servers can be upgraded constantly without ever affecting customers.

The other main reason for the switch is to save money. As suppliers can run huge datacentres to provide software, they benefit from huge economies of scale, so the price of running a system drops. Also, without the need to run servers locally, huge amounts of money are saved on energy and hardware costs. You also get the resilience of being able to run your servers from multiple data centres around the world, or have your files backed up to multiple locations on different continents.

What will it mean in the future?

So what will your system look like in the future? At the moment the main innovation coming out soon is Google’s Chrome OS. This is essentially an entire operating system wrapped around Google’s Chrome web browser. It will automatically store all of your files on Google’s servers, and whenever you log in to any computer running Chrome OS you will get your own interface and files.

Another example of cloud computing innovation is a company called OnLive, who let you stream computer games over the internet. This means you don’t need a cutting-edge PC to play the latest games: they set up their own systems to produce the best graphics and then stream the rendered images to your local computer over the internet.

This link has a video which explains how it works: http://www.onlive.com/service/cloudgaming?autoplay=force

Thanks for reading this blog and feel free to contact us if you would like to find out how we can help you with Cloud Computing.

Author: Luke Whitelock

In this day and age lots of people have portable devices. Along with laptops, there are also tablets and smartphones that users choose to bring into work and use. This is called Bring Your Own Device (BYOD). BYOD can make things more convenient for users by giving them a choice over what hardware and software they use. However, it creates issues for the company, for these reasons:

  1. It can introduce viruses onto the network: Devices that belong to the company are heavily regulated by the IT department. The computers have full anti-virus software installed, with strictly controlled firewalls and website blocking to prevent viruses getting onto the network. A user’s own device isn’t heavily regulated. They may have anti-virus on their device (they’d be silly not to), but this won’t necessarily stop them getting infected. If the device is used for personal purposes only, this won’t be a problem for the IT department or the company. But when the user brings the device into work, connects it to the network and starts sending out emails to colleagues, you could see the office quickly becoming contaminated with viruses. This would be a nightmare for the company, as it would take a while to clear the computers of viruses and get the office back online. This brings me to my second point.
  2. Compromising of company data: With the rise of cyber-crime, company data has become a valuable and tightly guarded asset. All this data is kept on encrypted servers behind firewalls and locked doors. These measures make it very hard for a potential hacker to get to the data, but with BYOD this can all be compromised. If a user works on sensitive data from their own device, they could compromise it in two ways: their device has few safeguards against hackers, and, should they take the data out of the company building, they could lose the device or have it stolen in a public place (this has happened before, with Government officials leaving laptops and USB sticks on trains).
  3. Technical issues: With company devices, everything is uniform. All the computers have the same software, hardware and applications, which makes it easier for IT support to fix potential issues. With BYOD you’ll find lots of different devices with different specifications. This makes it a lot harder for IT support to fix issues, due to being unfamiliar with the technology and the quagmire of apps the user may have on their device. It also creates issues with hardware and software compatibility. For example: a user brings in their MacBook Pro and uses an Apple application to do their work. They then send this work to their colleagues (who are using Windows-based devices), and the colleagues are unable to view it because they haven’t got the software to open the file.
There are ways round the problems caused by BYOD. One of these is the use of VMware Virtual Machines (VMs).
The use of VMs could completely negate the risks of BYOD if implemented properly. This is how it would work: the user brings their device into work. Once in, they set up the device and connect to a public network (this gives them internet access without putting the potentially infected device on the same network as the more secure devices on the company network). Once connected, they can start up their company VM and do their work on the VM.
By doing their work on the VM over a public network, the device never comes into contact with any company device on the private network. That way, viruses can’t spread to the company devices and infect them.
Also, by using VMs, company data can’t be physically taken off site. If the user takes their device off site, the company data won’t leave with them, because it stays on the VM. That way, should they lose their device or have it stolen, no untrusted third party would be able to view the data, as they would not have access to the VM.
The use of VMware VMs also cuts out the troubleshooting and hardware and software compatibility issues. The VMs can be created from templates, meaning that all users work with the same applications whilst using their own devices. Should a problem occur with the VM, IT support can fix it quickly (and easily) from their own computers. A rough sketch of such a template-based rollout follows below.
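
Here is what that might look like with VMware’s PowerCLI (a minimal sketch; the server name, template name and VM name are all hypothetical placeholders, not a specific setup):

# Connect to the vCenter server that hosts the company VMs (hypothetical address).
Connect-VIServer -Server vcenter.example.com

# Clone a new VM for a user from the standard corporate template, so every
# user gets an identical, IT-approved environment regardless of their own device.
New-VM -Name "desktop-jsmith" -Template (Get-Template -Name "Win7-Corp-Template") -VMHost (Get-VMHost | Select-Object -First 1)

# Power it on, ready for the user to connect to remotely.
Start-VM -VM "desktop-jsmith"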

There are many parts that cloud computing encompasses. One of those is virtual desktops. A virtual desktop is one of many operating systems run off centralised servers that pool their resources together. A user connects to the virtual desktop remotely from any device with an internet connection. The VDI (Virtual Desktop Infrastructure) uses the servers’ hardware to do all the processing and saving of data, so your own device is only used as a gateway to your virtual computer, where all the work is done. How can a virtual desktop benefit you? That depends on your requirements; below is a list of ways explaining how.

Cost

One way a virtual desktop can benefit you is by saving you money on electricity: it saves your computer from having to do all the processing it would usually do. Tied in with this is saving money on computer maintenance. With all the processing moved to the cloud, your computer does minimal work, which means your hardware will not wear out as fast as a normal computer’s would. Also, because a VDI sits on a centralised server, it can easily be accessed by a computer technician, who can diagnose and fix problems faster than with conventional support.

Flexibility

A great reason for using a virtual desktop is to extend the capabilities of your PC. Unless you have a top-of-the-range PC you won’t be able to run everything, and even then, depending on the operating system you are using, you won’t be able to run OS-specific applications. For example, say your computer is running Windows 7. It does the majority of things you want. However, for work you need an application that only runs on Apple Macs. With a virtual desktop, you could have a VDI with Mac OS installed on it within a short period of time. Also, a VDI can be changed on the fly: memory, upgrades, applications and so on can all be added while the VDI is in use, which means far less downtime is needed for maintenance and upgrades. A rough sketch of such an on-the-fly change follows below.
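
This is what such a change can look like on the provider’s side (a sketch assuming VMware PowerCLI and a VM with memory/CPU hot-add enabled; the VM name is a hypothetical placeholder):

# Grow a running virtual desktop's memory and vCPU count without shutting it down.
# This only works while powered on if hot-add was enabled for the VM beforehand.
$vm = Get-VM -Name "desktop-jsmith"
Set-VM -VM $vm -MemoryGB 8 -NumCpu 4 -Confirm:$false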

Connectivity

A great thing about virtual desktops is their ability to be used anywhere. Because your virtual desktop is in the cloud, all you need to access it is a PC with an internet connection. This is very handy for someone who works from home or moves around a lot for work: as long as they have a PC and an internet connection, they can access all their documents, emails, applications and so on from the virtual desktop.

I have decided to write this article after spending weeks troubleshooting Exchange 2003 on Small Business Servers with various ActiveSync issues. There are many issues and many solutions; I will try to describe the majority of the problems I have come across and the steps to troubleshoot them.

1: Check your Exchange services are started (default service settings)

Microsoft Exchange Information Store (MSExchangeIS) – Automatic
Microsoft Exchange Management (MSExchangeMGMT) – Automatic
Microsoft Exchange Routing Engine (RESvc) – Automatic
Microsoft Exchange System Attendant (MSExchangeSA) – Automatic
Microsoft Exchange Event (MSExchangeES) – Manual
Microsoft Software Shadow Copy Provider (swprv) – Manual
Microsoft Exchange IMAP4 (IMAP4Svc) – Disabled
Microsoft Exchange MTA Stacks (MSExchangeMTA) – Disabled
Microsoft Exchange POP3 (POP3Svc) – Disabled
Microsoft Exchange Site Replication Service (MSExchangeSRS) – Disabled
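
If you have PowerShell available (it was an optional install on Windows Server 2003), one quick way to check all of these in one go is something along these lines (a sketch, not specific to any one setup):

# List the state of the Exchange-related services in one table.
Get-Service MSExchangeIS, MSExchangeMGMT, RESvc, MSExchangeSA, MSExchangeES |
    Format-Table Name, Status -AutoSize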

2: Service Pack 2 for Exchange 2003
This is not part of the standard updates; to download it, follow this link: http://goo.gl/of9DE. To check whether you have the service pack installed, go to Exchange System Manager, right-click your server and go to Properties. Service Pack 2 is required by iOS devices.

3: Check that your firewall allows traffic on port 443.

4: Check that the Exchange network connection is at the top of the list in the advanced network settings.

5: Check the directory security in IIS for each virtual directory below.

Exchange Virtual Directory

  • Authentication = Integrated & Basic
  • Default Domain = NetBIOS domain name – e.g., yourcompany*
  • Realm = yourcompany.com
  • IP Address Restrictions = Granted Access
  • Secure Communications = Require SSL IS ticked (very important)

Microsoft-Server-Activesync Virtual Directory

  • Authentication = Basic
  • Default Domain = NetBIOS domain name – e.g., yourcompany*
  • Realm = NetBIOS name
  • IP Address Restrictions = Granted Access
  • Secure Communications = Require SSL and Require 128-Bit Encryption NOT ticked

Exchange-oma Virtual Directory

  • Authentication = Integrated & Basic
  • Default Domain = NetBIOS domain name – e.g., yourcompany*
  • Realm = NetBIOS name
  • IP Address Restrictions = Restricted to IP Address of Server
  • Secure Communications = Require SSL and Require 128-Bit Encryption NOT ticked

OMA Virtual Directory

  • Authentication = Basic
  • Default Domain = NetBIOS domain name – e.g., yourcompany*
  • Realm = NetBIOS name
  • IP Address Restrictions = Granted Access
  • Secure Communications = Require SSL and Require 128-Bit Encryption NOT ticked

Public Virtual Directory

  • Authentication = Integrated & Basic
  • Default Domain = NetBIOS domain name – e.g., yourcompany* (no more than 15 characters)
  • Realm = yourcompany.com
  • IP Address Restrictions = Granted Access
  • Secure Communications = Require SSL IS ticked (very important)

6: ASP.NET version
The ASP.NET version should be set to 1.1 for all the above directories.

7: HTTP keep-alives
HTTP keep-alives need to be enabled. Under Default Website Properties, on the Web Site tab, check that HTTP keep-alives are enabled.

8: Advanced website identification
On the Web Site tab, make sure this is set to All Unassigned with port 80.

9: Uninstall IPv6
Make sure IPv6 is not installed; if it is, uninstall it.

10: SSL certificate
Make sure the name on the certificate matches the fully qualified domain name used for ActiveSync. Check on the Directory Security tab, via View Certificate.

11: Import the SSL certificate
For the iPhone and Windows Mobile you will have to import the certificate onto the phone. The easiest option is to publish the certificate on the web, then navigate to it with your mobile browser and install it.

GoDaddy Certificate Instructions: To Install Your SSL in Microsoft IIS 6

  1. From the Start menu, click Run….
  2. Type mmc and click OK. The Microsoft Management Console (Console) opens.
  3. From the File menu, click Add/Remove Snap-in.
  4. Select Certificates, and then click Add.
  5. Select Computer Account, and then click Next.
  6. Select Local Computer, then click Finish.
  7. Click OK to close Add or Remove Snap-ins.
  8. In the Console window, expand the Certificates folder.
  9. Right-click Intermediate Certification Authorities, mouse-over All Tasks, then click Import.
  10. In the Certificate Import Wizard, click Next.
  11. Click Browse to find the certificate file.
  12. In the bottom right corner, change the file extension filter to *.p7b.
  13. Select the appropriate certificate file and click Open.
  14. Click Next.
  15. Select Place all certificates in the following store.
  16. Click Browse, select Intermediate Certification Authorities, and then click Next.
  17. Click Finish.
  18. Close the Console window.
  19. From the Start menu, go to Administrative Tools and open the Internet Information Services (IIS) Manager.
  20. Right-click the website or host name for your certificate.
  21. Click Properties.
  22. Click the Directory Security tab.
  23. Click Server Certificate….
  24. The Welcome to the Web Server Certificate Wizard window opens. Click Next.
  25. Select Process the pending request and install the certificate, and then click Next.
  26. Click Browse. Select all files, and select your certificate file.
  27. Click Next.
  28. Verify the Certificate Summary, and then click Next.
  29. Click Finish.
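
Once the certificate is in place, a quick sanity check of the ActiveSync endpoint from another machine is to request the virtual directory over HTTPS; a live endpoint normally answers with an HTTP error code (often 401, 501 or 505) rather than timing out. A rough sketch using modern PowerShell (the hostname is a placeholder):

# Probe the ActiveSync virtual directory. An HTTP status code back means IIS
# and the virtual directory are answering; a timeout means they are not.
try {
    Invoke-WebRequest -Uri "https://mail.yourcompany.com/Microsoft-Server-ActiveSync" -UseBasicParsing
} catch {
    $_.Exception.Response.StatusCode.value__   # prints the HTTP status code returned
}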

This is another situation where a migration to our Cloud Email service would remove the pain of managing an aging SBS 2003 or 2008 system. Have a look around our website to see how our services can make your life easier.

Useful tools:
http://goo.gl/BA7ZJ
http://goo.gl/Xsx3t

Resources used:
http://goo.gl/rpqBh
http://goo.gl/p2uwN
http://goo.gl/Hm81o

We have just booked a stand at the Richmond Business Expo on the 19th of April from 1pm to 8pm. Come and visit us there! We will be providing free advice to businesses on how the latest advances in IT can help their business.

With the preview of Microsoft’s new operating system, Windows 8, out, I decided to have a go at creating an app for it. The app I made was a simple “Hello World” app in which you type your name into an input box and, when you press enter, the app says hello to you. To do this you need the Windows 8 preview and Microsoft Visual Studio Express 2012 RC for Windows 8. I shall talk you through the steps to make this app in this blog.

1) Creating the App

The first thing you need to do (after installing Windows 8 and Visual Studio Express 2012 RC) is to create a blank app. To do this open up Visual Studio Express, click “File” and then click “New Project”. A menu will pop up with options for different apps. Select “Blank App” and name it “HelloWorld”. When you are done click “OK” and you will have created a blank app for you to play with!

2) Creating Your Start Page

Now that you have a blank app, we need to set up your start page. The start page is what appears when you start your app. At the moment, if you have left the default code, when you click the start button your start page will be a black screen with “Content goes here” displayed. To change this, go to the default.html file and replace the “Content goes here” code with the following (note: leave the CSS code alone, as you can use it later to set the background):

<body>
    <h1>Hello, world!</h1>
    <p>What's your name?</p>
    <input id="nameInput" type="text" />
    <button id="helloButton">Say "Hello"</button>
    <div id="greetingOutput"></div>
</body>

This should create a start page with the heading “Hello, world!”, a sentence underneath asking “What’s your name?”, an input box and a button.

3) Creating and Registering an Event Handler for the App

To get your Hello World working you now need to go to the default.js file and create an event handler for your app. This is where you write the code that provides the interactivity for your HTML. Leave any default code in there and, inside the boundaries of the main function, type the following:

function buttonClickHandler(eventInfo) {
    var userName = document.getElementById("nameInput").value;
    var greetingString = "Hello, " + userName + "!";
    document.getElementById("greetingOutput").innerText = greetingString;
}

Now that the event handler has been created you need to register it. To do this, add the following code to the app.onactivated section of the default.js file:

var helloButton = document.getElementById("helloButton");
helloButton.addEventListener("click", buttonClickHandler, false);

If you’ve done it right, then when you start the app, type in your name and click the button, it should display “Hello, *yourname*!” If this works for you then congratulations! You can now expand on what you have done here, for example changing the background using CSS or adding different functions so that different messages appear for different names. I hope you found this blog useful.

Hello everyone. Over the past week I have been migrating Alfresco from my server to a virtual server in the cloud. I shall run you through the steps I took to successfully set it up.

1) Preparing the new server for Alfresco

The first thing I did, before touching my old server, was to set up my VServer and make it ready for Alfresco. This involved downloading Alfresco Community 4.0d and pgAdmin III. Once Alfresco has been installed on the server, turn off Tomcat and don’t go into the setup process; all the config files will be migrated over, so you shouldn’t have to configure Alfresco at all on your new server. Leave Postgres running though. This is needed for restoring the Alfresco database through pgAdmin, and you won’t be able to set up pgAdmin if the Postgres port is closed, so leave it on and open.

2) Backing up the Alfresco database

Once the new server was prepared for Alfresco, I went back to my old Alfresco server and went about backing up my database. This contains all the files, users, settings etc. for Alfresco. Before you do this, make sure you have turned off Tomcat, using the Alfresco manager tool, otherwise people will still be able to upload files to Alfresco, and we don’t want that. To back up my Alfresco database I used the pgAdmin tool on the server: right-click on the alfresco database and click Backup. Which options you choose is up to you, but make sure you back it up into the alfresco\alf_data\backups folder, as the alf_data folder is what you’ll be migrating over.

3) Backing up the Tomcat extension folder

Once your database is backed up, move on to backing up the Tomcat extension folder. This is located at alfresco\tomcat\shared\classes\alfresco and contains all the scripts and config files for Tomcat. Back up the extension folder to the alfresco\alf_data\backups folder for migration.

4) Compressing the alf_data folder and migrating

Once everything has been backed up to the alf_data folder you are ready for migration. Before I migrated the folder I compressed it; this makes it smaller and easier to transfer to the new server. To compress the folder, just right-click alf_data, go to Send To and click the compressed folder option. Whatever method you choose to transfer the data is up to you; just transfer it to your Alfresco folder.

5) Setting up alf_data

Having transferred your compressed alf_data folder to the alfresco folder on your new server, rename the existing alf_data folder to something else, then unzip your copy and move the migrated alf_data folder into the alfresco folder. If you don’t rename the existing folder, the two will conflict and Alfresco won’t work. Renaming also gives you a restore point should everything go wrong; it is up to you whether you later delete the old folders or just leave them renamed.

6) Moving the backed-up Tomcat extension folder

This step is pretty similar to the last. Just move the extension folder to alfresco\tomcat\shared\classes\alfresco and rename or delete the extension folder already there. Make sure the migrated folder is actually called “extension”, otherwise Alfresco won’t work. Also, do a quick search through the files to make sure there isn’t any reference to the old server, as Alfresco won’t work on the new one if there is; if you find any, just change them to the new server.

7) Restoring the Alfresco database

This step can be rather tricky. Theoretically you should be able to restore the database using pgAdmin: just right-click on the alfresco database already in pgAdmin, choose the backup from the alf_data backups folder and click Restore. When I did this it didn’t work, so I reverted to using the command line. To do this, open up a command prompt and follow these steps:

1) Use the cd command to make the backup folder your current directory:

cd c:\alfresco\alf_data\backup

2) Run the psql program to establish a connection, specifying the database (-d), the superuser (-U) and the path to the file to be restored (-f):

psql -d postgres -U postgres -f c:\alfresco\alf_data\backup\(name of backed up database)

You’ll then be asked for the password for the database. Once entered, the database will be restored. After this has completed, your Alfresco should be fully migrated, with all the files and settings from the old server.

8) Checking Alfresco is working

Once the database has been restored, Alfresco should be fully migrated. Start up Tomcat in the Alfresco manager tool and try to log in to Alfresco through your web browser. If you manage to log in, check your files are there and that you are able to access them. If you can, then congratulations! You have migrated Alfresco successfully. If you can’t log in, or your files aren’t there, check the Alfresco log or your PostgreSQL database for errors or missing data.
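
As an aside to step 2, the backup can also be taken from the command line rather than through the pgAdmin GUI, which is handy for scripting. A rough sketch (assuming PostgreSQL’s client tools are on the PATH; the output file name is a placeholder):

# Dump the alfresco database in plain SQL format so it can be replayed with psql as above.
pg_dump -U postgres -F p -f c:\alfresco\alf_data\backups\alfresco_backup.sql alfresco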

09 Feb 2015

Intel Thunderbolt

Hello, today I will be talking to you about Intel’s Thunderbolt technology. Thunderbolt is a dual-protocol I/O technology that carries PCI Express and DisplayPort over a single cable which can transmit data and video bi-directionally. The cable has a bandwidth of 10Gbps and is used with the Thunderbolt controller, which handles the processing, sending and receiving of data. In a device that is daisy-chained to other devices, the controller passes data packets up and down stream. The cable itself is made from copper, due to the cheaper cost, although it was originally supposed to be fibre optic; a fibre optic cable is planned for the future.

Your devices can be daisy-chained, using the Thunderbolt cable, up to 7 devices (6 Thunderbolt devices and 1 DisplayPort monitor), and you can utilise each device’s hardware (as long as it has a Thunderbolt controller). This means that if you have a cheap, not very powerful laptop with a Thunderbolt controller built in, you can link it up to a monitor and 6 other devices and, theoretically speaking, use their memory, graphics card and so on, turning your cheap laptop into a relatively powerful computer.

Considering that Apple are the main users of Thunderbolt, with it being used in the MacBook Pro, and with technology shifting away from PCs and towards tablets (as shown by Microsoft’s Windows 8 and the unveiling of the Surface tablet), I can imagine this technology being used to complement the technology inside a tablet. When unplugged, your tablet can be a portable device for basic tasks (browsing the web, reading email, listening to music and so on), but when plugged in you could use your tablet to run the latest games or applications that are too demanding for the hardware your tablet fields.

ASUS have recently released a new motherboard, the P8Z77-V Premium, which has Thunderbolt integrated into it. This is an option if you want to use Thunderbolt without having to buy a Mac. However, I would recommend waiting a while before you invest in Thunderbolt: a Thunderbolt cable currently costs around £40 and the technology itself is rather buggy, so some time should be allowed for the bugs to be ironed out and the prices to drop.

09 Feb 2015

Cloud Security

Cloud security can be complex to understand. The best way to think about it is as you would about a safe: there are many different products, ranging from something that will open if you drop it up to a bank vault. There are also many different types of cloud services, ranging from consumer services such as Google Drive/Apps and Dropbox to business solutions such as hosted desktops and infrastructure as a service.

A large risk for companies at the moment is staff starting to use consumer services for business purposes, as the company then loses all control over the data held in these services and it becomes a huge security risk. This is normally caused by companies failing to keep up with the new technologies that allow members of staff to work in the most efficient way.

In order to discuss cloud security I will talk about the general areas which affect cloud services and some ways to mitigate the risk. The Wikipedia article on cloud computing security (http://en.wikipedia.org/wiki/Cloud_computing_security) breaks down the risks into the following sections, which I will discuss.

Identity management: The first issue that arises from cloud services is identity management. This normally covers how usernames and passwords are controlled. There are many options for how to implement this between your organisation and a cloud service, and a good cloud provider should work with you on it. However, they should only ever implement something as secure as, if not more secure than, what is currently being used. A simple thing to look out for is their password requirements: if they allow you to have a password of “password1”, it is unlikely the rest of their system is very secure.

Physical and personnel security: The next issue is how secure the hardware is where your data will be stored. The best way to find out is to ask for a visit to the provider’s site where your data will be held. Look for things like a secure fence around the building; expect to have to go through security checkpoints, provide ID and be escorted around the building at all times. Ask yourself how easy it would be for someone with malicious intent to break in and access the systems. On top of this, the people who have access to the systems should be limited and documented, so it is known who has access and when. It is also important to know where your data is being held, in order to be compliant with things such as the Data Protection Act; there are many companies which will store your data all over the world in order to reduce costs.

Availability: Any cloud provider should be able to guarantee a certain level of availability to your systems in a secure way. The best way to ensure this is to have an approved SLA (Service Level Agreement) in your contract with the provider, with penalties for the provider if it is not met. A service level agreement is a document that defines the level of uptime, how long support requests should take to be answered and completed, and various other things that define the quality of service expected.

Application security: Application security is very important in a cloud environment. A cloud provider should work with you on rolling out any software required in your environment; it should go through testing and be approved by the provider before being rolled out. In some situations the provider should tell you that an application cannot be deployed. Of course, they should then work with you on finding an alternative, secure piece of software.

Privacy: This covers how access to your private data, such as credit card details or passwords, is controlled. The best test I have found is to say you have forgotten your password. If they are able to tell you what your password is, their system is not secure: a forgotten password should always be reset to a new one, because there should never be a way to find out what someone’s password is. Passwords should always be stored with one-way encryption, so they cannot be recovered. Another warning sign is if they ask you for your password when providing support. A good provider will never ask for your password; they will reset it to something temporary while they need access and get you to change it as soon as they are finished.

Business continuity and data recovery: This is the process a cloud provider should have in place in the event of a disaster. The process should be documented, including the time until the system is back up and running, and it should be tested regularly: a minimum of once every 6 months, or after any change to the infrastructure. Ask to see a provider’s disaster recovery plan and when it was last tested.

Logs and audit trails: The final thing to check is that there is adequate logging, and that audit trails of access are kept for as long as needed and secured properly. A cloud provider should work with you to define these.

There are many different aspects of cloud security, but hopefully this has given you some tips on what to look out for.
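
To illustrate the point about one-way storage of passwords, here is a minimal sketch of salted PBKDF2 password hashing using the .NET classes available from PowerShell (purely illustrative; a real system would also store the iteration count and handle verification):

# Derive a one-way, salted hash from a password. Only the salt and hash are
# stored; the password itself is never kept anywhere.
$salt = New-Object byte[] 16
[System.Security.Cryptography.RandomNumberGenerator]::Create().GetBytes($salt)
$kdf  = New-Object System.Security.Cryptography.Rfc2898DeriveBytes("correct horse battery staple", $salt, 100000)
$hash = [Convert]::ToBase64String($kdf.GetBytes(32))
"Salt: $([Convert]::ToBase64String($salt))"
"Hash: $hash"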

Hello, today I will be showing you how to recover a deleted Office 365 mailbox using PowerShell. From time to time mistakes happen and, at least with Microsoft technology, there are ways of recovering data that has been deleted. Usually with Office 365 it’s a simple case of going to the Exchange server, viewing the deleted mailboxes and clicking the recover button. However, if you are unlucky, this method won’t work and you’ll have to fall back on using PowerShell. If you have used PowerShell before then you can skip to step 4; if not, start from the top.

1) Check the execution policy

The first thing you need to do is check what execution policy you are using, as the execution policy determines which scripts you can run:

Get-ExecutionPolicy

This will display your current execution policy. If it is set to Restricted then you’ll need to change the policy to RemoteSigned, as you won’t be able to run any scripts otherwise. Setting it to RemoteSigned lets you run local scripts and any downloaded script that has been signed by a trusted publisher. To change the execution policy, type in the following:

Set-ExecutionPolicy RemoteSigned

You will be asked to confirm that you want to change the execution policy; just press “Y” and it will be changed. Once you’ve done that you can move on to the next step.

2) Get credentials

This step is so that you can access the Office 365 account that you are the administrator of. To do this, type in the following and store the result in a variable:

$cred = Get-Credential

A pop-up box will appear asking for your email address and password. Type in the email address that you use to access the Office 365 account of which you are an administrator. Note: it is important that you type in the correct information. If you don’t, then when you do the next step you will get back an error and will have to do it all over again.

3) Set up your session and import it

The next thing you need to do is configure your session. Essentially, this step connects you to the Exchange server:

$session = New-PSSession -ConfigurationName Microsoft.Exchange -ConnectionUri https://ps.outlook.com/powershell/ -Credential $cred -Authentication Basic -AllowRedirection

Once you’ve entered that, PowerShell will check the connection to your Exchange server. When that is done you can move on to importing the session:

Import-PSSession $session

PowerShell will then import the session so that you are fully connected to the Exchange server. Once it has finished importing you can move on to the next step.

4) Recovering the mailbox

This is the script that you’ve probably been waiting for. It will create a new account, find the deleted account and then recover all the data to the new account:

New-Mailbox -Name "John Contoso" -RemovedMailbox "John Contoso" -MicrosoftOnlineServicesID john@contoso.com -Password (ConvertTo-SecureString -String 'Pa$' -AsPlainText -Force)

Just replace “John Contoso”, the MicrosoftOnlineServicesID address and the password with the details of the user mailbox that you are trying to recover. Once all the correct details are filled in, press enter and you will get a message saying that it is trying to recover the mailbox. Note: it can take up to 8 hours for the mailbox to be recovered, so it’s best to just leave it recovering for the day. In this time you won’t be able to access Outlook with the email address, but you will be able to see it in the users section of Office 365. Just log in as the user every so often and check whether you can get access to Outlook.
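
Once the recovery has gone through, a quick way to confirm the mailbox is back is something like the following, run in the same session as above (a sketch; substitute the real name):

# Confirm the recovered mailbox exists and see when it was created.
Get-Mailbox -Identity "John Contoso" | Format-List Name, WhenCreated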