Hello everyone. Today I will be talking you through how to fix the search function in Alfresco 4.0d. Out of the box, Alfresco’s search runs on Solr. However, Solr doesn’t work properly in Alfresco 4.0d, so when you try to search for a document Alfresco will come back with no results. To fix this you will need to switch from Solr to Lucene, which requires a bit of configuration editing. To do this, follow these steps:

  1. Stop Alfresco
    Before we start editing Alfresco’s files we first need to turn it off. This will allow you to edit the files required to set up Lucene. To do this, go to your Alfresco server and start up the Alfresco manager tool. Once in, go to the “Manage Application” tab and click the “Stop All” button. This will shut down Alfresco completely.
  2. Edit the file alfresco-global.properties
    Once Alfresco has been stopped you’ll need to edit the alfresco-global.properties file. This is located at: alfresco\tomcat\shared\classes\alfresco-global.properties. This file is responsible for all the base settings for Alfresco, including the database. The segment of code that you are looking for is:

    ### Solr indexing ###
    index.subsystem.name=solr
    dir.keystore=/keystore
    solr.port.ssl=8443

    You will need to edit some of this to set up Lucene. Replace solr with lucene in the line index.subsystem.name=solr, and comment out the lines dir.keystore=/keystore and solr.port.ssl. You will also need to add index.recovery.mode=FULL. This will rebuild the index so that Lucene will work. This is what you’ll need to change it to:

    ### Solr/lucene indexing ###
    index.subsystem.name=lucene
    index.recovery.mode=FULL
    #dir.keystore=/keystore
    #solr.port.ssl=8443

    Once that is done, save the file and start up Alfresco so that the index can be rebuilt.

  3. Stop Alfresco, edit alfresco-global.properties and delete some folders and files
    Once the index has been rebuilt you’ll need to stop Alfresco to do more editing. First things first, open alfresco-global.properties and go to the section we edited in the last step. You’ll need to change index.recovery.mode to the following:

    index.recovery.mode=AUTO

    Once that is done you need to delete the following folders and files:

    • Alfresco\alf_data\solr
    • Alfresco\tomcat\conf\Catalina\localhost\solr.xml
    • Alfresco\tomcat\webapps\solr

    Once those folders and files have been deleted you can start up Alfresco again with Lucene in place of Solr. The search function should now work.
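    If you’d rather script the step 2 property changes than edit the file by hand, here’s a minimal Python sketch. The property names come from the snippets above, but treat the whole thing as an assumption and check it against your own alfresco-global.properties before running anything.

```python
# Sketch only: reproduces the step 2 edits to alfresco-global.properties.
# Property names are taken from the snippets above -- verify against your file.

def switch_to_lucene(text):
    """Swap the Solr subsystem for Lucene and comment out the Solr-only lines."""
    out = []
    for line in text.splitlines():
        stripped = line.strip()
        if stripped == "index.subsystem.name=solr":
            out.append("index.subsystem.name=lucene")
            out.append("index.recovery.mode=FULL")  # forces the index rebuild
        elif stripped.startswith(("dir.keystore=", "solr.port.ssl=")):
            out.append("#" + line)  # Solr-only settings, commented out
        else:
            out.append(line)
    return "\n".join(out)

sample = """### Solr indexing ###
index.subsystem.name=solr
dir.keystore=/keystore
solr.port.ssl=8443"""

print(switch_to_lucene(sample))
```

    Remember to flip index.recovery.mode back to AUTO (step 3) once the rebuild has finished.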
    Thank you for reading.

Hello everyone. Today I will be talking about how to fix Word toolbar options not being saved. This fix can also be applied to toolbar options that are permanently grayed out. Word is a very easy tool to personalize. The Options tab gives you the ability to change the toolbar to the way you want it. However, sometimes Word won’t save your option changes. This can be caused by the toolbar cache being corrupted.

To fix this issue you need to go into the Registry Editor. The Registry Editor is a tool that gives you access to the whole Windows registry. To access it, go to Start > Run and type in “regedit”. This will bring up the Registry Editor. In it, you’ll need to navigate to HKEY_CURRENT_USER > Software > Microsoft > Office > 14.0 (this depends on what version of Office you have; e.g. you could be using 8.0) > Word > Data.
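As a side note, the “14.0” part of that path changes with your Office release; the post only mentions 14.0 and 8.0. The fuller mapping below is my own reference table in a small, hypothetical Python helper (not part of the fix itself), so double-check it against your install:

```python
# Office release -> registry version number. My own reference table, not from
# the original post; verify against your installed version (there is no 13.0).
OFFICE_VERSIONS = {
    "Office 97":   "8.0",
    "Office 2000": "9.0",
    "Office XP":   "10.0",
    "Office 2003": "11.0",
    "Office 2007": "12.0",
    "Office 2010": "14.0",
}

def word_data_key(release):
    """Build the full path to Word's Data key for a given Office release."""
    version = OFFICE_VERSIONS[release]
    return "HKEY_CURRENT_USER\\Software\\Microsoft\\Office\\" + version + "\\Word\\Data"

print(word_data_key("Office 2010"))
# -> HKEY_CURRENT_USER\Software\Microsoft\Office\14.0\Word\Data
```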

Once you are in the Data folder you need to look for a cache named “Toolbars”. Once located, you can either delete the cache or (if you want to play it safe) rename the cache to “toolbar_old”. This way you can revert to the cache should you need to. If you really want to play it safe then you can also back up the original cache.

Once that is done, close down RegEdit and restart Word. You should now be able to save your options. Test this by changing some options, restarting Word and then checking that the options remain. Then try restarting your PC and checking that the options remain the same after that. If they do then it should be working.

Thanks for reading!

Hello everyone. Today I will be talking about how to turn off add-ins in Microsoft Office. Add-ins are pieces of software added to Microsoft Office applications, usually to allow them to work in conjunction with other applications. An example of this would be a database application that contains customer details and has letter templates. When you click on a letter template, the template opens up as a document in Word. This is due to an add-in that was automatically installed into Office when the application was installed. This allows for better integration amongst numerous applications.

However, sometimes these add-ins from 3rd party software can cause issues with Office. On occasion I have seen them have effects on Office applications that cause Office to either not work the way it should or not work at all. In these instances, you should disable the add-ins one at a time to determine if any of them are causing the issues. To do this you need to:

  1. In the affected Office application, go to File and then click on Options. This is located on the left hand side between Help and Exit.
  2. Once in the Options control panel, click on the Add-Ins tab. This is located on the lower left hand side of the control panel.
  3. The Add-Ins panel will display all the add-ins that are active, inactive, document related and disabled. At the bottom is a drop-down list of add-ins to manage. Choose “COM Add-Ins” and then click “Go…”.
  4. You should now have a list of all the COM add-ins. If they have a tick next to them then they are active; if they don’t then they are inactive. To deactivate an add-in, just un-tick the box next to that particular add-in and press OK. You can also remove or add add-ins in this panel.

If you can’t even do that (I have had an instance where an add-in blocked access to the File tab, so I couldn’t access the Options) then you should start up the Office application in safe mode. To do this, click on Start and type into the search bar the name of the application followed by “/safe”. This will start up the application with all add-ins disabled and will allow you to try and (hopefully) find what is wrong with Office and fix it.

I hope this has helped you. Thanks for reading!

Hello, today I will be taking you through the steps of how to set up a POP or IMAP email account in Outlook. When it comes to manually setting up an email account, you have 2 options (depending on where your exchange is). The first is to point Outlook to a Microsoft Exchange server or an equivalent. You would use this if you run your email off Office 365 or an Exchange server provider. The other is to point Outlook to an SMTP and POP server. You would use this if you have your own email servers set up with POP and SMTP. Before you set up your POP email account you will need your email address, password and the domain names of the POP and SMTP servers.

  1. To add the email account, go to Outlook and click File > Info > Add Account. This will start up the account wizard immediately. Alternatively, you can add an email account by going to Start > Control Panel > Mail. Once you’re in Mail you can either click on Email Accounts, which will take you to the account settings of the default profile, or, if you wish to add the email account to a different profile, click on “Show Profiles” and then select the profile you wish to add the email account to. This will take you to the Account Settings panel as well. Once in the Account Settings panel, click on the “New…” button in the top left hand corner to start the “Add New Account” wizard.
  2. Once you’re in the “Add New Account” wizard, click “Manually configure server settings or additional server types” and click Next. Then choose “Internet Email” and click Next.
  3. Once in the “Internet Email Settings” panel you can start filling in the information. In the “Server Information” section, choose whether the server uses POP3 or IMAP and then enter the server’s domain name. This will likely be along the lines of “mail.example.com”. Fill in your server login details and click “Test Account Settings…” to make sure you have a connection to the server and that you can log in to it.
  4. Once you’ve established a connection with the server, click “More Settings”. In the General tab, fill in the reply email section. This is usually the same as your email address. If you are using an internet service provider’s SMTP server, go to the “Outgoing Server” tab and tick “My outgoing server (SMTP) requires authentication”. Click on the Advanced tab to check that the ports are set to 110 (POP3) and 25 (SMTP). Once done, click OK and then Next. Your POP email account should now be set up.
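Before running the wizard, it can be worth sanity-checking the server details. The Python sketch below (an optional extra, not part of the Outlook steps) confirms that 110 and 25 really are the standard POP3 and SMTP ports, and does a rough reachability test; mail.example.com is the same placeholder as above, so substitute your real server:

```python
import poplib
import smtplib
import socket

# The standard library's defaults match the ports the Advanced tab should show:
assert poplib.POP3_PORT == 110   # POP3
assert smtplib.SMTP_PORT == 25   # SMTP

def can_reach(host, port, timeout=5):
    """Rough check that a mail server is listening before you configure Outlook."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Placeholder hostname -- substitute your own POP/SMTP server:
# can_reach("mail.example.com", poplib.POP3_PORT)
```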

Thank you for reading.

Hello everyone, today I will be discussing Bitcoins. Some of you may already know of Bitcoins, but the large majority will never have heard of them. So I will be explaining what they are, how you obtain them and what you can use them for.

What are Bitcoins? Bitcoins are an online currency (also known as e-cash) created in January 2009 by Satoshi Nakamoto. Satoshi is a cryptographer who came up with the Bitcoin protocol. Bitcoins are similar to a local currency (such as a town’s own currency that you use in local shops) in the sense that the worth of the Bitcoin is decided by the online community. The worth of the Bitcoin will fluctuate depending on the amount and size of transactions though, unlike national currencies, the value of the Bitcoin can quickly recover. At the time of typing this, 1 Bitcoin (BTC) is equal to £8.239 GBP or $13.32 USD. Bitcoins come in 3 denominations: the Bitcoin (1), the Bitcent (0.01) and the Satoshi (0.00000001). All of these can be earned and traded over the internet like normal currency. They are kept in your Bitcoin wallet on your computer. These wallets have their own unique address (like a bank account number) that is used to make transactions.

How does it work? Like all currencies, Bitcoin relies on the community’s trust in the exchange for it to have value. If the community trusts Bitcoin, people will use, buy and sell it; if no-one trusts Bitcoin then no-one will use it. Unlike normal currencies, Bitcoin doesn’t have a centralized body that issues coins. Instead, Bitcoins are awarded to “miners”, who then sell their Bitcoins on to the community.
The miners themselves are individuals, all connected to the Bitcoin network, who either work by themselves or in groups (this will be discussed later). The number of Bitcoins that can be “mined” is capped at 21 million (at the time of writing there are currently 11 million Bitcoins in circulation). Once 21 million Bitcoins have been mined, no more will be created. This is outlined by the protocol that Satoshi came up with. People abiding by the protocol and trusting in the currency is what gives Bitcoins their worth and makes the system work. In order to make counterfeiting impossible, all transactions are stored in an online database called the block chain. The block chain is formed of blocks. A block is a list of all the transactions that took place in the space of 10 minutes. Once those 10 minutes are up, the block is added to the block chain and a new block is started. Once 6 blocks are stacked on top of a block in the chain, the consensus solidifies and it becomes impractical to alter the transactions for your own gain. Blocks must meet constraints, dictated by the network, before they are added to the block chain. This means that it is very hard for a person to cheat the system.

How do you obtain Bitcoins? You can obtain Bitcoins in 3 ways: exchanging for actual currency, trading, and mining. Exchanging actual money is how you get started in the Bitcoin exchange. Once you have some Bitcoins you can start trading with others in the community, in much the same way city traders do. You can buy and sell Bitcoins and then exchange them for real currencies. If you own a business you can join the 1,000 merchants signed up to Bitcoin and start trading your goods for Bitcoins. The main method of obtaining Bitcoins is mining. This is the process of finding a solution to a difficult proof-of-work problem, which confirms transactions and prevents double spending.
This is done by a node (a graphics card) on a dedicated server (it has to be dedicated as it requires a lot of electricity). These transactions are put into blocks every 10 minutes and then added to the block chain (the online public database that contains a list of all the transactions). The block chain will only accept 1 block per 10 minutes, and that block must meet stringent constraints dictated by the network. If the block isn’t in the form it should be, the block chain will reject it. The one block that gets accepted onto the block chain earns its owner 50 Bitcoins as a reward for the work they’ve done. The downside to mining is the time, effort and resources it takes to get the reward. Mining uses a lot of electricity and data and takes place on graphics cards, so it can be impractical and costly for the individual miner. For this reason, some miners pool their resources together and then split the reward depending on the work each has done. The only downside to this is that you have to give up some of the reward, but it can be beneficial in the long run. I hope this has given you a better understanding of the world of Bitcoins. Thank you for reading!
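To give a feel for what “finding a solution to a difficult proof-of-work problem” means, here is a toy Python sketch. Real Bitcoin mining double-SHA-256-hashes an 80-byte block header against a network-set difficulty target; this simplified version just hunts for a hash with a few leading zeros, which takes work to find but is easy for everyone else to verify:

```python
import hashlib

def mine(block_data, difficulty=4):
    """Toy proof-of-work: find a nonce whose SHA-256 hash starts with
    `difficulty` zeros. A simplification of real Bitcoin mining, which uses
    double SHA-256 over a block header and a much harder target."""
    prefix = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(prefix):
            return nonce, digest  # the solution other nodes can verify instantly
        nonce += 1

nonce, digest = mine("transactions in this 10-minute block")
print(nonce, digest)
```

Finding the nonce takes thousands of hash attempts even at this toy difficulty; checking it takes one. That asymmetry is what lets the network accept one block per 10 minutes and reject cheats cheaply.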

Hello there. Today I will be discussing why small to medium sized businesses should invest in a fully integrated cloud infrastructure over a traditional IT model. Cloud technology has been around for a couple of years now and (as with all things technical) businesses have been slow to adopt it. Most big businesses have taken the step to the cloud, yet small to medium sized businesses are still choosing the traditional IT option (physical desktops, on-site storage etc.). So below I’ve made a list of reasons why small businesses should move to the cloud.

  1. Flexibility Flexibility is a very important aspect for small and medium sized businesses. Being flexible allows them to manoeuvre and adapt to the ever changing business and economic climate we live in. So how come they don’t apply that to their IT? The traditional IT model is only as flexible as the amount of money you are willing to pay. For example: as the owner of a business, you need a server to do specific tasks. However, to make the most of that server you will need to think 5 years ahead to decide on its requirements. So, to make it flexible in the future, you have it run on up to date hardware, the latest software and a lot of disk space for expansion. By doing this, you now have a high end server that should be able to meet future requirements. However, to get that ability you are now down a large sum of money. Also, given that technology advances at a fast rate, you’ll find that after a couple of years your server will struggle to meet the demands of the tasks you give it. You’ll then have to pay more money to either improve performance or upgrade to a new server. In comparison, a cloud server gives you that flexibility from the word go. A cloud server can be changed and adapted to your needs in a matter of minutes. If you need more disk space, it can be added in a few clicks. Software can be added in a couple of clicks. This gives you a degree of flexibility that you wouldn’t be able to achieve with a traditional IT model. By taking the cloud route, you can adapt your IT to whatever your requirements are in a short amount of time and for a low cost. Flexibility also applies to maintaining your IT systems. A problem with a physical desktop takes longer to fix than one on a cloud desktop, as a technician would either have to try and fix the problem remotely or come on site to fix it.
If the technician has to come on site then the downtime is a lot longer, as you’d have to wait for the technician to arrive. With your IT systems in the cloud, the technician can take control of your computer and have the issue fixed in a shorter period of time.
  2. Security Your data is the most important asset of your company. Should your data become compromised or destroyed, your company will struggle to recover. With the traditional IT method, you’ll set up your computers and servers to back up regularly, encrypt any data you store and have anti-virus installed. These measures should protect you against most threats, such as viruses and unforeseen technical malfunctions. However, will they protect you from a major disaster? Should your company experience a major disaster (such as a flood or a fire), you’re looking at major downtime in your business and lost or unrecoverable data. This, potentially, could be an event that your business won’t be able to recover from. By utilising the cloud you can avoid these threats. Having all your data stored in the cloud takes away the risk of localized threats. For example: when using a cloud desktop you connect to the desktop over the internet. All the work that is done on that cloud desktop stays in the cloud. Should your PC break, all you have to do is go to another computer (with an internet connection) and access your cloud desktop from there. All the work you were doing will be left the way it was when your PC broke. So should your whole office go down, none of the work that you or your employees were working on will be lost. The same principle goes for data storage. With your servers in the cloud, you will be able to easily access your data once you have a site set up.
  3. Cost There are two main subjects when it comes to the cost of IT: the cost to run those IT systems (electrical usage) and the cost to maintain them. With the traditional IT model the cost is far higher. Even with the advancement of technology, your standard desktop still uses a lot of electricity. The same goes for your on-site servers, which you keep running 24/7. The cost to run these IT systems will increase each year as electricity prices go up and your PC goes out of date. Over those years your PCs and servers will also degrade. The standard life span of a desktop is 3 years, and in those 3 years the cost of maintenance will go up as your desktop struggles to cope with the tasks it’s given. This changes when you move to the cloud. A number of providers offer thin clients to access your cloud desktop. These are, essentially, compact desktops that run on minimal hardware. The hardware itself is barely used, as all the work is done on the cloud desktop itself. By using these you reduce electricity costs, as far less electricity is being used. Maintenance costs also go down because a thin client has far less to go wrong with it. Since the hardware is barely being used, the thin client degrades at a far slower rate, meaning it lasts far longer.

I hope you have found this blog article interesting. Thank you for reading.

Today I have compiled a list of 5 technologies to look forward to this year. I believe the items on this list will be game changing in their technological areas:

5) Synaptics’ new range of touch interfaces Synaptics is a touchpad manufacturer that has been around since 1995. If you’ve owned any device with a touch interface, the chances are you’ve used one of their products. This year they will be releasing a new range of touch products that offer more than the current products around at the moment: the ForcePad, the ThinTouch and the ClearPad.

The ForcePad is their latest touchpad device. It uses pressure tracking sensors instead of the traditional mechanical switches in current use. It measures the pressure that your finger is exerting on the pad. Light movement will move the cursor around on the screen and applying pressure will select. This is similar to standard touchpads. However, the pressure tracking sensors allow for extra features, as they allow the tracking of multiple fingers. For example: with a current touchpad, to right click an icon you have to use the right click button at the bottom of the touchpad. What Synaptics have done is do away with the buttons at the bottom and made it so that tapping the pad with two fingers is a right click. The extra functions are designed to work in conjunction with the new Windows 8 operating system (which is designed for touch input first). The ForcePad lets users perform all 8 of the touch commands that come with Windows 8 and has the technology to expand in the future.

Next in their product range is the ThinTouch. The ThinTouch is Synaptics’ new range of keyboards, originally designed for ultrabooks and thin notebooks. The main difference is the thickness of the keys. Modern keys use a scissor mechanism and range from 6mm to 3.5mm thick. In comparison, the ThinTouch is 2.5mm at its highest point.
This allows for very thin laptops, or larger batteries for the laptop. But this isn’t the coolest thing about the ThinTouch. The whole keyboard is equipped with capacitive touch sensors, which allow you to do gestures with the keyboard as well as the touchpad. Since each key has a capacitive touch sensor installed, there is an electric field over the surface of the keyboard. This also allows near-field gestures (waving your fingers over the keyboard rather than touching the keys).

The last of their products is the ClearPad. This is designed with smart phones, tablets and notebooks in mind, with up to 17” displays. It uses a single chip (a combination of display controller and touch controller) to do the work. This reduces the energy consumed and the cost of the chip, and reduces latency as well (latency meaning the time between command and response). This allows for much quicker response times to your touch commands, increases the battery life of the device and brings down its cost. Overall, the reason I’m looking forward to these devices is that they will change the way we interact with the devices we use. They will allow for much more fluid control over the touch device and improve the PC human interface.

4) The rise of OLED When it comes to display technology there are 4 technologies in use: Liquid Crystal Display (LCD), Plasma, Cathode Ray Tube (still around but not in mainstream use) and Organic Light-Emitting Diode (OLED). The last one on that list is the latest in display technology. It has been around for a while and is utilized in small screen devices such as the PS Vita and the range of Samsung smart phones. However, LG have very recently released their new 55” TV that uses OLED. The great thing about OLED is that it is much more energy efficient compared to LCD and plasma. It also allows for much thinner displays compared to LCD and plasma.
Combining both of these qualities means it’s great for portable devices, as it offers greater battery life (or, in the case of static displays, less electricity being used, which is great when we are all going green) and will either make the device thinner or free up space inside the device for other systems. The most interesting thing about OLED is that a number of people believe it could herald flexible, “bendy” devices. The reason for this is that OLED doesn’t require a glass screen. This means that devices made with OLED can be lighter, more durable and can be folded away or even worn. Samsung are already in the final stages of making their flexible phone (made with OLED), which should be released in the first couple of months of 2013.

3) Google Glasses Google glasses are Google’s latest mobile device, running the Android operating system. It is a pair of glasses which displays data on the inside of the glasses as a HUD (Heads Up Display). It is controlled by voice command and can be used on the move. Even though it isn’t the first pair of HUD glasses, it will probably be an innovation in its own right (much as the iPad wasn’t the first tablet but did kick start tablet technology). What the glasses allow you to do is have a constant stream of data whilst you are on the move. So unlike smart phones (which you have to hold in your hand and look at), all the information will already be there in front of your eyes. There are also rumours that Google are considering adding phone capability to the glasses. With this capability it could completely change the playing field when it comes to portable devices.

2) Leap Motion Leap Motion is a small box that sits in front of your keyboard and adds a completely new dimension to controlling your computer. Traditionally you’d use a keyboard and either a touchpad, touch screen or mouse to select items.
What Leap Motion does is allow you to control the computer by waving your hand over it. It senses the gestures your hand makes whilst over the device and then translates and inputs them into the computer. To get a better understanding I highly recommend you watch the video on their website: https://leapmotion.com/

1) Oculus Rift As a fan of video games I’m very much looking forward to the Oculus Rift. What most gamers want and enjoy about gaming is immersion, the feeling that they are actually inside and are a part of the game they are playing. This is mostly achieved by getting the biggest screens possible (or, in the case of a number of avid computer gamers, having multiple screens), having a good surround sound system and, if you are very serious about your gaming, a head tracker (this tracks the movements of your head and then uses them in your game). The only downside to all of this is that it costs a lot of money and the feeling of immersion is limited, in the sense that what you see on the screen is limited in field of view. What the Oculus Rift does is take immersion to the next level. It’s a pair of goggles that you wear on your head. It displays the game inside the goggles, which gives you peripheral vision and a sense of depth, neither of which can be achieved by a normal screen. It links up to your computer using a DVI connector and there are plans for it to be used on consoles in the future as well. They have a video on their website that demonstrates the Oculus Rift: http://www.oculusvr.com/ I hope you have found this blog interesting and will look forward to these new technologies with the same anticipation as I do. Thank you for reading.

Hello everyone, today I will be typing up 3 ways cloud technology can benefit accountants.

  1. Can be used remotely As an accountant you’ll probably be required to move around a bit, either going to different customers or working from home. Doing this means that you’ll have to take all the documents you need, and the applications to do your work with, with you. That would mean having a laptop capable of running all the required applications, with those applications installed and plenty of space to store the data. It costs a lot of money to get a decent laptop to do this work, and more again for the software licenses. This is ideal territory for a Virtual Desktop Infrastructure (VDI). A VDI is a virtual computer that runs on a server on the internet. You can access it from anywhere, as long as you have a device with an internet connection. The VDI will have all the applications you require for work and access to all the data you need (such as shared work drives). It can be accessed on a cheap laptop with WiFi capabilities. The VDI becomes what you do your work on, essentially your office desktop. All that work can be easily accessed from your client’s site or from home.
  2. Can be used effectively in the office In an office that utilizes the traditional IT model (i.e. desktop computers, servers etc.) a lot of money is spent on running and maintaining the IT systems. Everyone requires a PC and you also require multiple servers to run services (in a large office this can cost a lot on the electricity bill). Your IT support costs will also be high. Desktops have a life span of 3 years, meaning that plenty of money will need to be spent replacing them every couple of years, and even more to roll out new applications and software to the desktops. If you have multiple branches then you will also require VPNs to be set up so that all the workers are connected to each other and have access to the data they need. Adoption of cloud technology can change this. Instead of using traditional desktops you would use a thin client (an 8th of the size of a standard PC). This would be used to access your VDI. The great thing about the thin client is that it has far fewer parts inside it than a desktop. With fewer parts it uses less electricity and its life span increases to 5 years. This takes a good chunk out of the electricity bill and maintenance costs. All servers can be moved to the cloud as virtual servers. This frees up space and takes away the cost of running and maintaining server rooms. With all your workers running on VDIs, they would all be able to access the same data. The VDIs would be set up on the same network, which means that workers all around the country would have access to the same network drives. The centralization of the VDIs also makes it easier for IT support to access the VDIs and fix them. An IT technician can monitor, control and remotely access the VDIs in seconds. From a central management console they can roll out updates and applications in minutes, compared to a technician going round the office and installing software on each individual computer.
By doing this, downtime of the office computers is greatly reduced, meaning less money is lost to maintenance.
  3. Securely store data in an easy to access place Accountants have to handle their clients’ most sensitive data. With the rise of cyber crime this data should be made very secure, because if it fell into the wrong hands it could cripple the company. However, securing it can make it harder for those who need to access it. Having it in the cloud can cater for both worlds. Cloud servers are typically hosted where the IT technicians are, giving them direct access to the server should anything go wrong. The data would be backed up and secured on encrypted servers at the data centre (protecting it from cyber criminals and other hazards), but at the same time the people who need it would have easy access. You would be able to access the data remotely as long as you know the passwords and have the permissions to view the files. This allows accountants to do their work whilst keeping out the unwanted.
Thank you for reading.

Today I will be discussing how schools, colleges and universities can benefit from cloud technology. Some of these places are the size of large companies. Making use of an efficient, cost effective and integrated IT system can benefit schools, students and staff immensely. Cloud does all that, so I’m going to explain how moving to the cloud can benefit education systems.

  1. Access to all the software a student needs for their course From secondary school upwards, students become more varied in the courses they take (in the same way that different students need different books for their courses). With these courses comes the technology students need to use (i.e. a computing student will require Visual Studio, an engineering student CAD, etc.). With the current systems in place this is very difficult to achieve. Most large schools have an Active Directory system in place. This allows the students to log in to any of the computers on the school domain, but they won’t be able to access the software unless it’s installed on that computer. With a VDI they would still be able to log in to any of the computers, but also do work (with the software required to do it) outside of class.
  2. Quicker and easier to roll out the latest technology Keeping up to date is important in education. Any new technology that is shown to greatly improve the quality of education should be installed on the school system as soon as possible. The problem is that it takes the onsite technicians a long time to go and install new software on each computer individually. With cloud desktops you can roll out the new technology to all of the students’ virtual desktops within minutes. The same applies to specialist software (i.e. something that only students of a specific course need access to). Normally you would have to install it on the computers in the classrooms used by that course; with a cloud desktop you can quickly roll it out to just the students who need it, and they can then access the software from any desktop via their cloud desktop. This allows the school to remain up to date with the latest technology and could potentially improve the learning standards of the students.
  3. Students can access all their work from home or on their own devices at school More and more students are bringing their own devices into schools, colleges and universities. Depending on the school’s stance, BYOD (Bring Your Own Device) is either discouraged or outright banned. However, cloud desktops can turn BYOD to the student’s advantage. A cloud desktop can be set up to run on any device with internet access (laptops, tablets, even smartphones), so a student can use it to type notes in class or lectures and to help with research tasks. Their own device then actually benefits their studies rather than being a distraction. Because cloud desktops can be accessed from any device with an internet connection, they also help in recent times when students have to do more work outside of school (homework or coursework) or cannot get to school due to turbulent weather (e.g. heavy snowfall). Sometimes that work requires applications the student does not have at home, and bringing the finished work into school (by email or USB) puts the school IT systems at risk from viruses. Using a cloud desktop means the student has access to all the applications and files they need, whilst doing the work on a protected system.
  4. All work saved and stored in one easy to access place In regards to studies, it is very important for data to be easily accessed by teachers and students from a central point. In a number of schools this is achieved using an intranet that can be logged into via the school website: teachers upload work there and students download it. The problem with this is that, since teachers and students are using their own computers, there is a wide range of applications in use, so not everyone will be able to read what the teacher has uploaded; their PC may have a different version of the application, or not have it at all. With a VDI, all work is saved and stored centrally. Students can be set up with shared drives (depending on the course they are doing) to which their teachers save work, revision notes, etc. for them to view. Given that everyone is doing the work on the same VDIs, with the same applications and software, there is no capability gap between students and teachers.

I hope you have found this interesting. Thank you for reading.

Hello there, my name is Ben and today I will be talking you through how to set a user’s permissions in Office 365 using PowerShell. I shall walk you through the steps to configure the permissions of an Office 365 user so that they can view another user’s mailbox.

To configure a user to view another user’s mailbox, follow these steps:

1) This step shows you what the execution policy is on your computer. Your execution policy determines which scripts you are allowed to run. Therefore, it is important that you do this step before starting, so that you are sure you can actually execute the commands you’re about to use.

Get-ExecutionPolicy

2) This step prompts you for the credentials of the Office 365 administrator account you are logging in as and stores them in a variable (here $Cred) for use when connecting. The users that this administrator is in control of are the ones whose settings you will be able to change.

$Cred = Get-Credential

3) This step creates a remote session to the Exchange server for Outlook and stores it in $Session, so that when you import it you will get all the cmdlets required from the server.

$Session = New-PSSession -ConfigurationName Microsoft.Exchange -ConnectionUri https://ps.outlook.com/powershell/ -Credential $Cred -Authentication Basic -AllowRedirection

4) This step imports the session, loading the cmdlets that you are about to use.

Import-PSSession $Session

5) This step sets the permissions. "-Identity" is the mailbox that you want the other user to view, and "-User" is the user to whom you are granting the permissions. "-AccessRights" is what the other user can view; giving them FullAccess (as in this command) gives the other user full access to the mailbox. Remember to replace the two placeholder addresses below with the respective users’ email addresses.

Add-MailboxPermission -Identity <owner@yourdomain.com> -User <delegate@yourdomain.com> -AccessRights FullAccess -InheritanceType All

6) This step is optional, but it will confirm whether the permissions you've just added have been applied.

Get-MailboxPermission -Identity <owner@yourdomain.com>
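
Putting the steps together, here is a minimal sketch of the whole session. The email addresses are placeholders for accounts in your own tenant, and the closing Remove-PSSession (which tidies up the remote session once you are done) is an addition not covered in the steps above:

```powershell
# Check that the execution policy allows the commands below to run.
Get-ExecutionPolicy

# Prompt for Office 365 admin credentials and connect to Exchange Online.
$Cred = Get-Credential
$Session = New-PSSession -ConfigurationName Microsoft.Exchange `
    -ConnectionUri https://ps.outlook.com/powershell/ `
    -Credential $Cred -Authentication Basic -AllowRedirection
Import-PSSession $Session

# Give the delegate full access to the owner's mailbox, then verify.
Add-MailboxPermission -Identity owner@yourdomain.com -User delegate@yourdomain.com `
    -AccessRights FullAccess -InheritanceType All
Get-MailboxPermission -Identity owner@yourdomain.com

# Close the remote session when finished.
Remove-PSSession $Session
```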

Thank you for reading this blog. I hope it has helped you with allowing an Office 365 user to view another’s mailbox.