What is important to grow in the career ladder: knowledge, experience, or skill?

 


When a seasoned employer or recruiter initiates a candidate search to solve the problems/issues phrased as a job offer, the primary things looked for in your curriculum vitae are knowledge, then experience, and finally skills.

Let's first see whether all three of these terms mean the same thing.

They look alike but are not the same. If you read them carefully, again and again, you can figure out that:

Knowledge - Getting exposed to technology, workmanship

Experience - Second stage wherein you start applying the knowledge

SKILL - You build on your experience in a single area and become skilled in that portfolio

What level are you at?

This post is primarily aimed at helping prospective job seekers, and anyone who has hit a career block and doesn't know the exact trigger that will propel them further.

Say you are in the pre-final year of college. You start preparing for your upcoming campus interview. In case you don't have campus placements, you are still preparing to find your new job.

Nowadays employers prefer an element that sets you apart from the class. Here is a simple blueprint you can chart for yourself to move ahead:

  1. Choose what you want to do as your job - Many people end up choosing a career they are not interested in, mainly because they lack proper planning and preparation. Before entering a real-time job profile, try to analyse and come to a conclusion on what your best career would be. This can come in the form of simple analysis of job profiles, attending in-plant training, internships if you find one, etc.
  2. Work at it each and every day - This is a simple thing that every one of us can do. Spending at least 10 minutes a day reading lessons on what we want to be, acquiring knowledge beforehand, will help us move ahead. Parents always insist on reading daily. Success is a result of constant, continuous hard work focused on one thing, one real thing. Make it a habit to read each and every day about the single thing you want to be. This will help you gain KNOWLEDGE. This is the first step.
  3. Showcase your BEST attitude working part-time - Say you find a part-time job that doesn't cost you anything but also doesn't pay you anything: don't hesitate to take it up. The earlier you start building experience, the further ahead you will be in the marketplace. OUT OF BOX TIP # Say you don't find any job: create a free account on advertising portals and post advertisements saying that you want a part-time job. Publish your interest on social media as well. Create your candidate profile on job portals with your knowledge details and say that you want part-time jobs or internships on a paid or unpaid basis. This will showcase your best attitude and help you find your first job.
  4. Don't waste weekends - College days are the best days to spend lots of time with your friends. What if you find like-minded friends who aim at achieving BIG? Such friendships will help you find a beautiful career real soon. Spend weekends reading to improve your knowledge, searching for part-time jobs, working part-time jobs and internships to improve your experience, taking up your own projects out of self-interest, etc. After all, TIME is MONEY.
  5. Become SKILLED - We have seen the evolution from knowledge to experience. Now, you become skilled when you become dependable in your team. Remember that a skilled person is respected as the best leader.

Always, WORK HARD, STAY FOCUSED, ACHIEVE

SQL Server Interview Questions

1) Give details on enabling filestream at database level using SQL Server Configuration Manager:-

SQL Server Configuration Manager, the tool that comes as an integral part of the SQL Server installation, helps us enable the filestream feature. All Programs -> Microsoft SQL Server 2012 -> Configuration Tools -> SQL Server Configuration Manager -> choose the instance (say SQLEXPRESS).

Right-click and choose Properties. Click on the FILESTREAM tab. Enable filestream, and enable I/O access from a Windows share and remote clients if needed. Click Apply, then OK.

This will enable filestream access across the SQLEXPRESS (for our example) instance.

This is similar to executing the following T-SQL query in SQL Server Management Studio -> File -> New Query:

exec sp_configure filestream_access_level,2

reconfigure

go
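To confirm the setting took effect, the configured and effective access levels can be queried. A minimal sketch using the documented SERVERPROPERTY filestream properties (note the effective level only changes after a service restart):

```sql
-- Check the configured and effective filestream access levels
SELECT SERVERPROPERTY('FilestreamConfiguredLevel') AS ConfiguredLevel,
       SERVERPROPERTY('FilestreamEffectiveLevel')  AS EffectiveLevel;
-- 2 = full access enabled (T-SQL and Win32 streaming access)
```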

2) How do you add AdventureWorks to a SQL Server database?

Many features in a SQL Server 2012 database can be tested easily if we have a good dataset in place. As it is often not possible to simulate a dataset, Microsoft has come up with a simple, easy-to-use solution: the AdventureWorks2012 database.

This is an easy plug-in type database that can be attached to the current SQL Server 2012 infrastructure.

The first step is to download the AdventureWorks2012 datafile from the Microsoft website:

http://msftdbprodsamples.codeplex.com/releases/view/55330

1) Launch SQL Server Management Studio 2012

2) Right-click on Databases and click the Attach option

3) Add the AdventureWorks2012.mdf file from the DATA folder of MSSQLSERVER

4) Click and remove the missing logfile entry

5) Click OK and the database automatically gets added to Object Explorer in SQL Server Management Studio

6) Expand AdventureWorks2012 and make sure that all tables exist

7) In a new query window, typing use AdventureWorks2012; go will take us to this database
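The attach steps above can also be scripted. A minimal T-SQL sketch (the datafile path is an assumption for a default SQL Server 2012 instance; adjust it to wherever you placed the .mdf) using FOR ATTACH_REBUILD_LOG, which rebuilds the missing log file automatically:

```sql
-- Attach the datafile and let SQL Server rebuild the missing log file
CREATE DATABASE AdventureWorks2012
ON (FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\DATA\AdventureWorks2012_Data.mdf')
FOR ATTACH_REBUILD_LOG;
GO

-- Switch to the newly attached database
USE AdventureWorks2012;
GO
```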

SQL interviews are common among technology and non-technology professionals alike. Here are some questions to make your preparation easy.

1) Is it possible to create a table with no columns?

A table is the basic storage object in an Oracle database. It is illogical to create a table without any column (we create tables just to store data). Still, I was curious to know the output/error while creating a table without any column. Here is the result:

SQL> create table test_nocolumn( );
create table test_nocolumn( )
*
ERROR at line 1:
ORA-00904: : invalid identifier

This is the output while creating a table with no columns in an Oracle database.

2) Give details on object and system privileges in databases:-

Privileges are permissions that restrict the level of access of each and every user of an Oracle database. This makes database access safe and secure: only users having permission can access the database, and not all users can perform all tasks. This restriction makes the database a safe space to work with. There are two major sets of privileges:

1) Object privileges
2) System privileges

Object privileges - SELECT, INSERT, UPDATE, DELETE. The major DML and SELECT privileges against different objects like tables, views, indexes and synonyms in the database let users perform only the actions for which they hold the privilege. A simple demonstration of this privilege:

SQL> grant select on test to A; - Grants select permission on test to user A
SQL> grant delete on test to B; - B can delete the rows in test. He will not be able to select the rows

System privileges - CREATE TABLE, ALTER SYSTEM, CREATE USER, CREATE SESSION. DDL operations including object creation happen with CREATE system privileges. Also, if we need to tweak operations at the system level we need the ALTER SYSTEM privilege.

3) What is the reason behind ORA-00947: not enough values?

I created a table in an Oracle database and tried inserting values into it. I tried inserting two columns into a table made of three columns. This popped the following error:

SQL> insert into employee values(1,'info');
insert into employee values(1,'info')
*
ERROR at line 1:
ORA-00947: not enough values

SQL> desc employee;
 Name            Null?    Type
 --------------- -------- ------------
 EMP_ID          NOT NULL NUMBER(38)
 EMPLOYEE_NAME            VARCHAR2(20)
 SALARY                   NUMBER(38)

SQL> insert into employee values(1,'info',10000);
1 row created.

4) What is the reason behind ORA-00907: missing right parenthesis?

The Oracle Database INT datatype can't have a length specified. If we want to specify a length we can make use of NUMBER instead of INT.

SQL> create table test (id int(10), phone number(10));
create table test (id int(10), phone number(10))
*
ERROR at line 1:
ORA-00907: missing right parenthesis

The issue gets fixed if we rewrite the query as follows:

SQL> create table test (id number(10), phone number(10));
Table created.

5) How do you enable and disable triggers in Oracle?

Oracle triggers are PL/SQL procedures that are automatically fired by the database based on specified events. Triggers are used for performing audit and security related tasks. They can be fired before or after DML statements or system events (database startup, database shutdown, users logging in, users logging off).

CREATE TRIGGER ... - This is the syntax

When created, triggers are enabled by default. We can temporarily disable a trigger using the following command:

SQL> ALTER TRIGGER trigger_name DISABLE;

We can re-enable the trigger using the following command:

SQL> ALTER TRIGGER trigger_name ENABLE;

6) Give details on the UNION operator in SQL:-

SQL statements support a UNION clause that lets us combine the information in two tables. It functions the same way as the mathematical set union operator. The output is the combination of all the information in both tables:

SQL> select * from t1;

        ID NAME
---------- ----------
         1 lr

SQL> select * from t2;

LOCATION        PHONE
---------- ----------
globe        45623456

SQL> select name from t1 union select location from t2;

NAME
----------
globe
lr

7) T-SQL to check if the filestream feature is enabled at database level in SQL Server:-

Filetables, the latest feature in SQL Server 2012, make use of the filestream feature at database level. Before creating filetables it is mandatory to know if filestream is enabled at database level. Here is the simple T-SQL that helps us determine the same:

if exists (select * from sys.database_files where type_desc='FILESTREAM')
    print 'Filestream Configured For Database'
else
    print 'Filestream Not Configured For Database'

If it is not enabled, the following T-SQL can be run in an SSMS new query window to enable filestream:

execute sp_configure filestream_access_level, 2
reconfigure

8) How will you shrink a database in a SQL Server environment?

Shrinking a database is often an option to reclaim unused space from the database. This de-fragmentation will help us shrink the database back toward its originally created size. We can perform the shrink operation using SQL Server Management Studio or using a T-SQL statement. The DBCC command is used to perform the shrink operation on a database. Say, to shrink a database by name demo, issue the following command:

dbcc shrinkdatabase(demo, 30); - This will shrink the data and log files in the demo database and leave 30% free space in those files

9) How will you perform a SQL Server backup onto a network share?

SQL Server offers maintenance plans that automate full, differential and transaction log backups by creating jobs and running them on a schedule. There are situations where we need to back up SQL Server databases directly onto a network share. Formally, this is not supported by Microsoft. However, it is possible and can be done under the guidance of a SQL Server expert with approval from Microsoft support personnel. Here are the simple steps:

1) Enable xp_cmdshell. This will change the value from 0 to 1, and the setting is made permanent using the reconfigure command:

sp_configure 'xp_cmdshell', 1;
go
RECONFIGURE WITH OVERRIDE;
go

2) Set up the network share as the backup destination:

exec xp_cmdshell 'net use X: \\networkshare-name';

Verify the existence of the drive as follows:

exec xp_cmdshell 'Dir X:';

3) On the computer look for X: and make sure it is accessible via a UNC path

4) Create maintenance plans, specify X: as the backup destination and test backups

10) Adding a logfile to the AdventureWorks2012 database:

It is evident that we can simply attach the AdventureWorks2012.mdf datafile to a database and make it accessible. However, a logfile is not automatically added as part of this process. Adding a logfile to the AdventureWorks2012 database is simple and straightforward using the alter database databasename add log file command:

alter database AdventureWorks2012
add log file (
    name=Adventureworks2012log1,
    filename='C:\SQLSERVER\DBNAME\MSSQL11.DBNAME\MSSQL\DATA\AdventureWorks2012.ldf',
    size=1024KB)

11) Create a new database with filestream enabled in SQL Server 2012:

Here is the simple T-SQL query that can be run in SQL Server 2012 Management Studio and helps us create a database with filestream enabled:

create database TestFSDatabase
on primary (
    name=myTestFSDatabaseDB,
    filename='C:\SQLSERVER\DBNAME\MSSQL11.DBNAME\MSSQL\DATA\TestFSDatabase.mdf'),
filegroup TestFSDatabase contains filestream (
    name=TestFSDatabase,
    filename='C:\SQLSERVER\DBNAME\MSSQL11.DBNAME\MSSQL\DATA\TestFSDatabase')
log on (
    name=TestFSDatabaseLog,
    filename='C:\SQLSERVER\DBNAME\MSSQL11.DBNAME\MSSQL\DATA\TestFSDatabase.log')
GO
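Before enabling or disabling a trigger it helps to check its current status. A small sketch against Oracle's USER_TRIGGERS data dictionary view (the trigger name emp_audit_trg is hypothetical, for illustration only):

```sql
-- List triggers owned by the current user along with their status
SELECT trigger_name, status
FROM   user_triggers;

-- Disable a specific trigger, then re-enable it
ALTER TRIGGER emp_audit_trg DISABLE;
ALTER TRIGGER emp_audit_trg ENABLE;
```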

Project Manager Job Responsibilities

 A person vested with the responsibility of PM has to perform the following:

1. Set Objectives - This starts with negotiation, attending requirements gathering, requirement finalization
2. Establish plans - This is the design phase during which blueprint of the project is laid out
3. Organize resources - Allocation and management of Human resources (personnel), financial resources etc
4. Provide Staffing - Integral part of resource organization
5. Set up controls - Proper monitoring of resources, recording work progress, charting results, presenting results to clients, manage change requests including approval, rejection of changes etc
6. Issue Directives - Integral part of control
7. Motivate personnel - Organize personality development training program, approval of technology training
8. Apply innovation for alternative actions
9. Remain flexible - Open to discussion with peers, willing to put in extra effort to accomplish the tasks

Checklist needed while launching a new business

As an entrepreneur aiming to launch a new business, you can create a checklist based on these ideas. This covers both online and offline business models.
1) Idea – Many of us think the fundamental element of any business is money. This can be true to some extent. However, the idea is the key for any business. If you have money you can think about investment options like stocks, fixed deposits, CDs, real estate, to name a few. If you have an idea you can think of starting something on your own. Basically, any business is built around a simple idea. If you have both idea and money you can expand the business real quick. Money comes as a supporting medium and a byproduct of your hard work. Have confidence and launch your idea today even if you are running out of resources.
2) Write up – Write down the steps to be followed to launch your business. This can include brainstorming sessions with your close friends, family, etc. When you write down the tasks to be completed you get them right most of the time.
3) Prioritize – Once the steps are in place, the next step is to prioritize them. This can include steps like launching the business website, choosing a site, PR marketing, hiring and setting up a team, to name a few.
4) Plan – Create a plan after setting priorities. This can include steps like looking for the best hosting provider for your business website. In project management terms, the work to be done is broken down and the work breakdown structure is created here.
5) Execution – This step follows planning. The real business action starts now. This is often the toughest portion of any business. Your hard work, 100% commitment and perseverance come into play in this step.

6) End Result – The output of any good business is the expected end result. This usually comes in the form of turnover, revenue, you name it.

A business launch is a process. This process can be organized with a proper checklist of items that makes it easy, simple and less complicated. Here is a template for a business launch checklist.


Business Name - Every language starts with an alphabet. The first thing needed for a business launch is a business name. This can sound simple at first but it is very important and has a very big impact in the long term. Here are generic pointers you can keep in mind while deciding on a business name: your business name should reflect the purpose of the business - this takes into consideration the product, service, etc. that the business is trying to accomplish. Take the example of our own business name, a website that talks about making a business very successful. Always remember that this business can become a brand and live for decades. Choose a name that is easy, simple and powerful and that speaks to the purpose you are trying to accomplish.




Business Entity - To show the authenticity of your business you can make it a legal entity from day 1. In addition to protection, this will project your professional outlook in front of customers and clients. Talk to your auditor, who in most cases can help you with legal business entity registration.

Business Logo - You can create a simple graphic with a basic paint program. For a professional logo reflecting your brand, hire freelance logo creators to save money.

Website



Each and every business demands an online presence. Nowadays a branded website has become the online face of any business. It takes you beyond horizons and lets you showcase your product (or) service across the globe at the click of a mouse. We can help you with this process.


Newsletter - This is a component of email marketing, which falls under digital media marketing. A simple form (or) button that pops up as part of your website will help you collect email details of current and prospective customers. All the updates, news, happenings and to-be-launched products can be communicated properly via the newsletter.

Digital Media Marketing



Once the website is up and running you should start adopting proper marketing strategies to position yourself among zillions of websites. Here come digital media marketing strategies, starting with content creation, content marketing, email marketing, SEO, PPC, SEM, YouTube marketing, social media marketing, etc.


Business Card - Physical cards that you can carry in your wallet will help you project yourself and showcase your business to each and every person you come across. Always carry at least 10 business cards with you to grow your network.

Toll Free Telephone Number - In my opinion a simple Google Voice number is the cheapest option you can think of. It is 100% free within the USA and even international calls are dead cheap. If you prefer your customers to leave voice mail, set up your mailbox in the phone and add some tools to the website. We will talk more about this in coming posts.

Official Email - Though you may have an email account with generic extensions like gmail, yahoo, ymail, etc., an official email address is something that projects you in a professional light. As soon as you create a website, create an official email address. This can be easily managed using your gmail account as a mail client. I'll talk more about this in a separate post.

Notepad and pen - An entrepreneur is a goal setter, go getter. Goals when charted clearly can be accomplished easily. Always have small simple notebook and pen/pencil in your workspace

Can branding be established with proper blogging?

In this digital media era, blogging, content creation and marketing have become inevitable for all businesses that take their business online. Today let's take a quick look at why branded businesses rely on blogs, and how a blog helps establish a brand.

First of all, let's see what a brand actually means. According to Wikipedia, a brand is a name, term, design, symbol or other feature that distinguishes one seller's product from those of others. To put it simply, a property of a product that sets it apart represents the brand.
Now let's see how blogging goes hand in hand with branding.
Blogging is the creation of articles that convey information on your business, process, service, products, to name a few. This can include simple topics ranging from what your product is up to the features of your product. In the recent past, as part of digital media marketing efforts, we have seen the first version of release notes describing the features of a brand-new product published in the online space in the form of articles.
Will creation of posts help me establish my brand?
In the recent past I've come across a website, inbound.org, which is exclusively focused on the digital media marketing space. There we find lots of jobs for content creators, SEO specialists, social media marketers, content curators and copywriters who have a good grasp of digital media marketing. We see a range of companies looking for professionals who are experienced in, or looking to enter, the digital media marketing space. This includes a range of organizations from startups to major brands that offer handsome salaries. These full-time roles offer full-time benefits as well. As such, digital media marketing has been factored in as a component to promote and propel businesses and make them brands.
Creation of posts that talk more about the product will assist in establishing a brand, considering the following:
1) Transparency of business - Posts are an easy, quick way to communicate details on what a product is all about. Say, when you consider posts at aaabbbcccxyz.com, we talk about many domains, the training options available, and the real benefit of completing such training. This helps our readers assess their current proficiency level and determine if they really need the skill to grow professionally and personally. Our business is to provide insight on different skills and the benefits of availing training to acquire them; this makes our readers come back to us, and we are a branded name in the education and career space. Similarly, if a company has just come up with a product, say instant chat software catering to startups and mid-size organizations, it becomes essential to clearly create a post on what their business is and who the customers can be. This indirectly conveys the vision of the business and helps them derive a brand out of it.
2) Inexpensive PR business promotion - In the recent past, up until the last decade, public relations was possible via magazines and journals. A team of dedicated experts was typically involved in the PR activity. However, PR has been made easy and inexpensive via the blogging platform. Nowadays all it needs is a simple WordPress install and the creation of relevant blog posts to project your product, showcase it and take it to social media including Facebook, LinkedIn, Twitter and Pinterest, to name a few. While creating PR in the form of blog posts, the unique features of the product are marketed to make it a brand.

AWS big data certification practice tests question and answer

 1) Can you configure your system to permit users to create user credentials and log on to the database based on their IAM credentials?

a) Yes
b) No
Answer: a
Explanation : Users in an Amazon Redshift database can log on using their normal database account as well as their IAM credentials
2) You want to secure IAM credentials for a JDBC or ODBC connection. How can you accomplish this?
a) Encrypt the credentials
b) Make use of AWS profile creating named profiles
c) Not possible
Answer: b
3) How will you directly run SQL queries against exabytes of unstructured data in Amazon S3?
a) Kinesis UI
b) SQL Developer
c) Hue
d) Redshift Spectrum
Answer: d
4) In terms of data write-rate for data input onto kinesis stream what is the capacity of a shard in a Kinesis stream?
a) 9 MB/s
b) 6 MB/s
c) 4 MB/s
d) 1 MB/s
Answer: d
5) EMR cluster is connected to the private subnet. What needs to be done for this to interact with your local network that is connected to VPC?
a) VPC
b) VPN
c) Directconnect
d) Disconnect
Answer : b,c
6) Which host will you make use of to have your local network connect to EMR cluster in a private subnet?
a) bastion host
b) station host
c) vision host
d) passion host
Answer : a
7) An EMR cluster must be launched in a private subnet. Can it be used with S3 or other AWS public endpoints if it is launched in a private subnet?
a) Yes it is possible
b) No not possible
c) Need to configure NAT to make it possible
d) Need VPC for this connection
Answer: a,d
8) Your organization is going to use EMR with EMRFS. However, your security policy requires that you encrypt all data before sending it to S3 and that you maintain the keys. Which encryption option will you recommend?
a) server side encryption S3
b) client side encryption custom
c) server side encryption key management system
d) client side encryption key management system
Answer: b
9) In an EMR are core nodes optional?
a) Yes
b) No
Answer: b
Explanation : In EMR, task nodes are optional and core nodes are mandatory
10) Do EMR task nodes include HDFS?
a) Yes
b) No
Answer: b
11) You created a Redshift cluster. You have enabled encryption in this cluster. You have completed loading 9TB of data into this cluster. Your security team makes a decision not to encrypt this cluster. You have been asked to make the necessary changes and make sure the cluster is not encrypted. What will you do?
a) Decrypt the existing cluster with redshift modify options
b) Remove on-perm HSM module
c) Create a new cluster that is not encrypted and reload the 9TB data
d) Remove the encryption keys file and the cluster is automatically decrypted
Answer: c
12) Does AWS Key Management Service supports both symmetric and asymmetric encryption
a) Yes
b) No
Answer : b
Explanation : Only symmetric encryption is supported in AWS key management service
13) How will you encrypt EC2 ephemeral volumes?
a) Using WICKS
b) Using KICKS
c) Using LUKS
d) Using BUCKS
Answer : c
14) You will have to encrypt data at rest on instance store volumes and EBS volumes. How will you accomplish this?
a) KMS
b) LUKS
c) Open source HDFS encryption
Answer : b,c
15) You want to automatically setup Hadoop encrypted shuffle upon cluster launch. How can you achieve that?
a) Select the in-transit encryption checkbox in the EMR security configuration
b) Select the KMS encryption checkbox in the EMR security configuration
c) Select the on-perm HSM encryption checkbox in the EMR security configuration
d) Select the CloudHSM encryption checkbox in the EMR security configuration
Answer : a
16) Do you know what a Hadoop Encrypted Shuffle means?
a) HDFS is encrypted using cloudHSM
b) AWS KMS is used for encrypting data at rest
c) Data intransit between nodes is encrypted
d) The files in S3 are encrypted and shuffled before being read by EMR
Answer : c
17) Your security team has made it a mandate to encrypt all data before sending it to S3 and S3 will manage keys for you. Which encryption option will you choose?
a) SSE-S3
b) CSE-Custom
c) SSE-KMS
Answer : a
18) You have been asked to handle a project that has lots of python development resources. As this is totally new you have responsibility to choose the open-source tools that integrate well with Python. This is a project that does not make use of spark. Which one is recommended?
a) Jupyter Notebook
b) D3.js
c) Node.js
d) Apache Zeppelin
Answer : a
19) You have been asked to handle a project that has lots of python development resources. As this is totally new you have responsibility to choose the open-source tools that integrate well with Python. This is a project that does make use of spark. Which one is recommended?
a) Apache Zeppelin
b) Hue
c) Jupyter Notebook
d) Kinesis
Answer : a
20) Why are there no backup data files accessible for file restores while using Redshift?
a) Redshift is ephemeral storage
b) Redshift is a NoSQL database
c) Redshift is a managed service
d) Redshift is a column based database that does not support backups
Answer : c
21) If Kinesis Firehose experiences data delivery issues to S3, how long will it retry delivery to S3?
a) 7 hours
b) 7 days
c) 24 hours
d) 3 hours
Answer : c
22) You have to create a visual that depicts one or two measures for a dimension. Which one will you choose?
a) Heat Map
b) Tree Map
c) Pivot Table
d) Scatter Plot
Answer: b
23) You are looking for a way to reduce the amount of data stored in a Redshift cluster. How will you achieve that?
a) Compression algorithms
b) Encryption algorithms
c) Decryption algorithms
d) SPLUNK algorithms
Answer: a
24) How does UNLOAD automatically encrypt data files while writing the resulting file onto S3 from Redshift?
a) CSE-S3 client side encryption S3
b) SSE-S3 server side encryption S3
c) ASE
d) SSH
Answer: b
25) What does an area under curve (AUC) value of 0.5 mean?
a) This model is accurate
b) This model is not accurate
c) This creates lots of confidence
d) This creates less confidence beyond a guess
Answer: b,d
26) What is AUC?
a) Average unit computation
b) Average universal compulsion
c) Area under curve
d) None of the above
Answer: c
27) What does a lower AUC mean?
a) improves accuracy of the prediction
b) reduces accuracy of the prediction
c) mean of all predicted values
d) mode of all predicted values
Answer: b
28) You have an AUC value of 0.5. Does that mean that the guess is accurate and perfect?
a) Yes
b) No
c) Partially yes
Answer: c
Explanation: A value of 0.5 means the guess is not a perfect guess but rather a random one
29) Can you make use of redshift manifest to automatically check files in S3 for data issues?
a) Yes
b) No
Answer: b
30) Can you control the encryption keys and cryptographic operations performed by the Hardware security module using cloudhsM?
a) Yes
b) No
Answer: a
31) You are in the process of creating a table in AWS DynamoDB. Which among the following must be defined while creating the table? What are the required definition parameters?
a) The Table Name
b) RCU (Read Capacity Units)
c) WCU (Write Capacity Units)
d) DCU (Delete/Update Capacity Units)
e) The table capacity number of GB
f) Partition and Sort Keys
Answer: a,b,c,f
32) Must an EMR cluster be run in a public subnet?
a) Yes
b) No
Answer: b
Explanation: Owing to compliance or security requirements we can run an EMR cluster in a private subnet with no public IP addresses or attached Internet Gateway
33) Your project makes use of Redshift clusters. For security purposes you created a cluster with encryption enabled and loaded data into it. Now you have been asked to present a cluster without encryption for the final release. What can you do?
a) Remove security keys from configuration folder
b) Remove encryption from live redshift cluster
c) Create a new redshift cluster without encryption, unload data onto S3 and reload onto new cluster
Answer: c
34) You are using an on-prem HSM or CloudHSM as the security module with Redshift. In addition to security, what else is provided with this?
a) High availability
b) Scaling
c) Replication
d) Provisioning
Answer: a
35) CloudHSM or on-prem HSM are the options that can be used as a hardware security module with Redshift. Is it true or false?
a) True
b) False
Answer: a
36) CloudHSM is the only option that can be used as a hardware security module with Redshift. Is it true or false?
a) True
b) False
Answer: b
37) You are making use of the AWS Key Management Service for encryption. Will you make use of the same keys, different keys, or hybrid keys on a case by case basis?
a) same keys
b) different keys
c) hybrid keys
Answer: a
Explanation : AWS Key Management Service supports symmetric encryption where same keys are used to perform encryption and decryption
38) How is the AWS Key Management Service different from CloudHSM?
a) Both symmetric and asymmetric encryption are supported in CloudHSM; only symmetric encryption is supported in Key Management Service
b) CloudHSM is used for security, Key Management Service is for replication
c) The statement is wrong. Both are same
Answer: a
39) Which among the following are characteristics of cloudHSM?
a) High availability and durability
b) Single Tenancy
c) Usage based pricing
d) Both symmetric and asymmetric encryption support
e) Customer managed root of trust
Answer: b,d,e
40) In your Hadoop ecosystem you are in the shuffle phase. You want to secure the data that is in transit between nodes within the cluster. How will you encrypt the data?
a) Data node encrypted shuffle
b) Hadoop encrypted shuffle
c) HDFS encrypted shuffle
d) All of the above
Answer: b
41) Your security team has mandated that all data be encrypted before sending it to S3, and you will have to maintain the keys. Which encryption option will you choose?
a) SSE-KMS
b) SSE-S3
c) CSE-Custom
d) CSE-KMS
Answer : c
42) Is UPSERT supported in redshift?
a) Yes
b) No
Answer: b
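Since Redshift has no native UPSERT, the usual workaround is a staging-table merge. A minimal sketch, with hypothetical table, column, bucket, and role names:

```sql
BEGIN;
CREATE TEMP TABLE stage (LIKE target);
COPY stage FROM 's3://my-bucket/data/new_rows/'
IAM_ROLE 'arn:aws:iam::123456789012:role/MyRedshiftRole';
-- Delete rows that will be replaced, then insert the new versions
DELETE FROM target USING stage WHERE target.id = stage.id;
INSERT INTO target SELECT * FROM stage;
COMMIT;
```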
43) Is a single-line insert the fastest and most efficient way to load data into Redshift?
a) Yes
b) No
Answer : b
44) Which command is the most efficient and fastest way to load data into Redshift?
a) Copy command
b) UPSERT
c) Update
d) Insert
Answer : a
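COPY loads files in parallel across the cluster, which is why it outperforms row-by-row inserts. A hedged example; the table name, bucket path, role, and file format are illustrative assumptions:

```sql
COPY sales
FROM 's3://my-bucket/data/sales/'
IAM_ROLE 'arn:aws:iam::123456789012:role/MyRedshiftRole'
GZIP
DELIMITER '|';
```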
45) How many concurrent queries can you run on a Redshift cluster?
a) 50
b) 100
c) 150
d) 500
Answer : a
46) Do primary and foreign key integrity constraints in a Redshift project help with query optimization?
a) Yes. They provide information to the optimizer to come up with an optimal query plan
b) No. They degrade performance
Answer : a
47) Is defining primary key and foreign key relationships mandatory when designing Redshift tables?
a) Yes
b) No
Answer : b
48) Redshift, the AWS managed service, is used for OLAP and BI. Are the queries typically simple or complex?
a) Simple queries
b) Complex queries
c) Moderate queries
Answer : b
49) You are looking to choose a managed service in AWS that is specifically designed for online analytic processing and business intelligence. What will be your choice?
a) Redshift
b) Oracle 12c
c) Amazon Aurora
d) Dynamodb
Answer : a
50) Can Kinesis streams be integrated with Redshift using the COPY command?
a) Yes
b) No
Answer : b
51) Will Machine Learning integrate directly with Redshift using the COPY command?
a) Yes
b) No
c) On case by case basis
Answer : b
52) Will Data Pipeline integrate directly with Redshift using the COPY command?
a) Yes
b) No
Answer : a
53) Which AWS services directly integrate with Redshift using the COPY command?
a) Amazon Aurora
b) S3
c) DynamoDB
d) EC2 instances
e) EMR
Answer : b,c,d,e
54) Are columnar databases like Redshift ideal for small amounts of data?
a) Yes
b) No
Answer : b
Explanation : They are ideal for OLAP systems that process heavy data loads, i.e., data warehouses
55) Which databases are best for online analytical processing applications OLAP?
a) Normalized RDBMS databases
b) NoSQL database
c) Column based database like redshift
d) Cloud databases
Answer : c
56) What is determined using F1 score?
a) Quality of the model
b) Accuracy of the input data
c) The compute ratio of Machine Learning overhead required to complete the analysis
d) Model types
Answer : a
Explanation : The F1 score ranges from 0 to 1. An F1 score of 1 indicates the best model quality
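The F1 score is the harmonic mean of precision and recall, which is why it ranges from 0 to 1. A small illustrative Python helper (an assumption for clarity, not part of the original text):

```python
def f1_score(precision: float, recall: float) -> float:
    # F1 is the harmonic mean of precision and recall; ranges from 0 to 1.
    # A score of 1 means perfect precision and recall.
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

print(round(f1_score(0.8, 0.6), 4))  # 0.6857
```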
57) Which JavaScript library lets you produce dynamic, interactive data visualizations in web browsers?
a) Node.js
b) D3.js
c) JSON
d) BSON
Answer : b
58) How many read transactions per second are supported by each shard?
a) 500 transactions per second for reads
b) 5 transactions per second for reads
c) 5000 transactions per second for reads
d) 50 transactions per second for reads
Answer : b
59) Where does Amazon Redshift automatically and continuously back up new data to?
a) Amazon Redshift data files
b) Amazon Glacier
c) Amazon S3
d) EBS
Answer : c
60) Which one acts as an intermediary between record processing logic and Kinesis Streams?
a) JCL
b) KCL
c) BPL
d) BCL
Answer : b
61) What should you do when an Amazon Kinesis Streams application receives provisioned-throughput exceptions?
a) increase the provisioned throughput for the DynamoDB table
b) increase the provisioned ram for the DynamoDB table
c) increase the provisioned cpu for the DynamoDB table
d) increase the provisioned storage for the DynamoDB table
Answer : a
62) How many write records per second are supported by a shard?
a) 1000 records per second for writes
b) 10000 records per second for writes
c) 100 records per second for writes
d) 100000 records per second for writes
Answer : a
63) You own an Amazon Kinesis Streams application that operates on a stream composed of many shards. Will the default provisioned throughput suffice?
a) Yes
b) No
Answer : b
64) You have an Amazon Kinesis Streams application that does frequent checkpointing. Will the default provisioned throughput suffice?
a) Yes
b) No
Answer : b
65) What is the default provisioned throughput in a table created with KCL?
a) 10 reads per second and 10 writes per second
b) 100 reads per second and 10 writes per second
c) 10 reads per second and 1000 writes per second
Answer : a
66) You have configured an Amazon Kinesis Firehose stream to deliver data to a Redshift cluster. After some time, you see a manifest file in an errors folder in your Amazon S3 bucket. What could have caused this?
a) Data delivery from Kinesis Firehose to your Redshift cluster has failed and retry did not succeed
b) Data delivery from Kinesis Firehose to your Redshift cluster has failed and retry did succeed
c) This is a warning alerting user to add additional resources
d) Buffer size in kinesis firehose needs to be manually increased
Answer : a
67) Is it true that if Amazon Kinesis Firehose fails to deliver to the destination because the buffer size is insufficient, manual intervention is required to fix the issue?
a) Yes
b) No
Answer : b
68) What does Amazon Kinesis Firehose do when data delivery to the destination falls behind data ingestion into the delivery stream?
a) system is halted
b) firehose will wait until buffer size is increased manually
c) Amazon Kinesis Firehose raises the buffer size automatically to catch up and make sure that all data is delivered to the destination
d) none of the above
Answer : c
69) Your Amazon Kinesis Firehose data delivery to an Amazon S3 bucket fails. An automated retry has been happening every 5 seconds for a day, and the issue has not been resolved. What happens once this goes past 24 hours?
a) retry continues
b) retry does not happen and data is discarded
c) s3 initiates a trigger to lambda
d) All of the above
Answer : b
70) Amazon Kinesis Firehose has been constantly delivering data to Amazon S3 buckets. Kinesis Firehose retries every five seconds. Is there a maximum duration for which Kinesis Firehose keeps retrying to deliver data to the S3 bucket?
a) 24 hours
b) 48 hours
c) 72 hours
d) 12 hours
Answer : a
71) Amazon Kinesis Firehose is delivering data to S3 buckets. All of a sudden, data delivery to the Amazon S3 bucket fails. At what interval does Amazon Kinesis Firehose retry?
a) 50 seconds
b) 500 seconds
c) 5000 seconds
d) 5 seconds
Answer : d
72) How is data pipeline integrated with on-premise servers?
a) Task runner package
b) there is no integration
c) amazon kinesis firehose
d) all the above
Answer : a
73) Is it true that Data Pipeline does not integrate with on-premise servers?
a) True
b) False
Answer : b
74) Kinesis Firehose can capture, transform, and load streaming data into which of the amazon services?
a) Amazon S3
b) Amazon Kinesis Analytics
c) Amazon Redshift
d) Amazon Elasticsearch Service
e) None of the above
Answer : a,b,c,d
75) Which AWS service does Kinesis Firehose not load streaming data into?
a) S3
b) Redshift
c) DynamoDB
d) All of the above
Answer : c
76) You perform a write to a table that contains local secondary indexes as part of an update statement. Does this consume write capacity units from the base table?
a) Yes
b) No
Answer : a
Explanation : Yes, because its local secondary indexes are also updated
77) You are working on a project where EMR makes use of EMRFS. What types of Amazon S3 encryption are supported?
a) server-side and client-side encryption
b) server-side encryption
c) client-side encryption
d) EMR encryption
Answer : a
78) Which among the following is an implementation of HDFS that allows clusters to store data on Amazon S3?
a) HDFS
b) EMRFS
c) Both EMRFS and HDFS
d) NFS
Answer : b
79) Is EMRFS installed as a component with release in AWS EMR?
a) Yes
b) No
Answer : a
80) An EMR cluster is connected to a private subnet. What needs to be done for it to interact with your local network that is connected to the VPC?
a) VPC
b) VPN
c) Direct Connect
d) Disconnect
Answer : b,c

NetBackup Interview Question and Answer Preparation

1) A database administrator requires that a custom script be executed prior to the backup of the database. This custom script requires approximately 20 minutes to execute. Which two configuration options must be modified to allow this operation in a Veritas NetBackup environment?
DB_EXECUTE_TIMEOUT in vm.conf, BUSY_FILE_PROCESSING in bp.conf
2) Which NetBackup process is responsible for volume recognition on a media server?
vmd
3) What options can be used to expire physical media in a veritas netbackup environment?
date, change of the backup retention level
4) Which command can unfreeze a tape with the media ID ABC123 in NetBackup?
bpmedia -unfreeze -m ABC123
5) What is required for SSO to work in a NetBackup environment?
A SAN is required for SSO (Shared Storage Option) to work
6) Which netbackup commands can be used to check and update the volume configuration?
vmchange, vmupdate
7) A new policy has been created with Full and Incremental schedules. A volume pool named PoolA has been configured. The Incremental schedule is using tapes out of the Full volume pool, not the PoolA volume pool. What can be done to have the Incremental schedule use the PoolA volume pool?
Policy > Schedules > Incremental > Attributes > Override policy volume pool > PoolA
8) Which veritas netbackup command can be used to bring a down drive to an UP state?
vmoprcmd
9) Which factors can determine the number of data streams that can run concurrently?
Multiplexing setting, MAX STREAMS setting, maximum jobs per client setting
10) Which methods automate notification about the status of all backups in veritas netbackup environment?
Modify backup_exit_notlfy.cmd for e-mail notification, Set the e-mail address in host properties
11) What is the appropriate sequence of steps needed to properly implement the VxVM Snapshot Method?
Here are the sequence of steps needed to implement VxVM snapshot :
11.1) Invoke VxVM to establish snap mirror prior to backup
11.2) Perform the backup from the snapshot volume
11.3) After the backup has been completed, invoke VxVM to re-synchronize the snap mirror
12) What is the use of commands vmcheck, bpupdate, vmcheckxxx ?
vmcheck is used to check the backup created, bpupdate is used to update the timing and similar properties of a backup, and vmcheckxxx is used to check a backup piece based on its timestamp
13) To restore data that was backed up to a NearStore VTL and then off to tape do you need to restore from tape to the NearStore VTL before restoring the data to the server?
No need
14) Give details about OSSV :-
While each OSSV transfer is incremental, each view on the secondary (i.e. snapshot) is seen as a full backup. You can manage OSSV with the CLI (command line interface) tools, through DataFabric Manager, or via third-party tools such as Syncsort/Bakbone/CommVault
15) What is the largest unit that you can backup with SnapVault?
Volume
16) A customer with OSSV has used the SnapVault/SnapMirror bundle to convert their OSSV destination to R/W for testing. The testing was a success and they would like to copy their changes back to the OSSV primary. How can this be done?
This is not a possible scenario
17) A customer wants to use SnapVault to backup snapshots from a traditional volume. Which one of the following will impact SnapVault performance?
Frequency of SnapVault transfers
18) Customer wants to do a nightly backup that is locked down for compliance purposes. Which
product will you use?
LockVault

19) A customer has backed up a SnapVault volume on a SnapVault secondary to tape. The steps to recover data from tape to the SnapVault primary will be:
1) Tape->secondary volume->primary volume
2) Tape->primary volume
20) A customer has an Oracle database with an RTO of 1 hour but an RPO of 5 minutes. How can they achieve this with minimal WAN traffic?
SnapVault the Database files every hour and SnapMirror the logs every 5 minutes
21) How can you make a SnapVault destination volume read-write?
SnapMirror/SnapVault bundle
22) Can the NearStore VTL be added to an existing Fibre Channel fabric without removing the tape device?
Yes
23) A customer has a volume with 5 qtrees and assorted directories and files not in the qtrees. How can the customer back up the data that is not in the qtrees?
Use the SnapVault command line
25) A customer has a volume on a filer that contains 10 qtrees. All but one of the qtrees have similar change rates and size. The volume is scheduled to transfer every morning at 1 AM. The 9 qtrees with similar characteristics all transfer within 15 minutes and the tenth qtree takes two hours to complete. When will the SnapVault transfer be completed for those qtrees?
03:05 hours
26) A customer wants to run 40 OSSV relationships concurrently. Which system can support this as a destination with the NearStore personality?
FAS3050
27) What is the minimum RPO available using SnapVault schedules?
1 hour
28) Can SnapVault replicate data over Fibre Channel between the primary and secondary storage systems?
No, this is not possible
29) How can we tune a NearStore VTL system?
Nothing, the system is auto-tuning.
30) What would you use to get a centralized view or SnapVault backups in a NetApp FAS environment?
Data Fabric Manager
31) What is required to ensure that a DataFort configuration is ready for disaster recovery?
DataFort current database, Recovery Cards, Recovery Card password
32) When are Recovery Cards required in the trustee key-sharing process?
Only during the establishment of the trustee relationship
33) Give details on disk and tape I/O in Datafort environment :-
Disk and tape I/O cannot be combined through the same standalone DataFort. Disk and tape I/O cannot be combined through the same cluster.
34) How many additional FC-switch ports are required for an inline direct-attached FC-5XX DataFort?
0
35) If your recovery schema is two out of five, how many Recovery Cards do you need to remove a cluster member?
0
36) What are the indications of a media type or duplex mode mismatch on the DataFort admin interface?
The cluster continually aborts and reforms. The DataFort techdump takes a very long time to complete.
37) In a SAN, you are restoring a deleted Cryptainer that was created in the SAN. How can you find the key ID for the encrypted LUN?
By using the CLI command fcc key list
38) What is the purpose of the Key Policy setting for tapes?
It determines whether or not each tape has a unique key
39) Which factors should be considered when assessing the performance impact a DataFort will have on an existing environment?
The number of devices connected to a specific DataFort
The combined speed of all devices connected to the DataFort
The device type, tape or disk, being connected to the DataFort
40) If your disk array will support 150 MB/s, approximately how long will it take you to complete the Encrypt Empty Disk Task on 100TB of disk space?
1 second
41) The E-Series DataFort Local ACL feature is designed to prevent unauthorized Windows administrators from gaining Cryptainer access by adding themselves to which one?
The share ACL
42) Give some details on Cryptainer :-
One Cryptainer is created per LUN.
The original LUN ordering is always preserved for Port Mapped Cryptainer vaults
43) Which SCSI signaling and connector type is used by an S-Series DataFort?
LVD signal, VHDCI connector type
44) Which defense setting requires zeroization after rebooting with a low battery condition?
High
45) Which step is automatically performed when using host virtualization?
Granting Cryptainer access to virtualized hosts
46) Which filtering options does the network capture capability on the DataFort allow?
Filter on IP address, Filter on DNS hostname, Filter on port number, Filter on IP protocol, Filter using "and," "or" and "not"
47) What can the E-Series DataFort net util tcpdump command capture?
Network packets going between DataFort client-side/server-side NIC and arbitrary machines
48) Which actions remove key material?
Zeroization of DataFort with destruction of its System Card, Deletion of all manually saved configdb files
49) Which settings do you use to enable the tape error recovery extension to the FC protocol?
dfc.disable_host_fc_tape; dfc.disable_storage_fc_tape
50) What can be used to share key material between different DataFort clusters?
A trustee relationship with key export/import, DataFort cloning, Key Translation in LKM software, Key Translation in LKM appliance
51) Give details about the DataFort appliance remote logging capabilities :-
All DataFort logs can be configured to be sent to a remote Windows-based server. DataFort log storage locations can be configured from the GUI or the CLI. DataFort logging can be configured to be sent to a remote syslog server.
52) Which FIPS certification does the FC520 have?
140-2, Level 3
53) Which two upgrades produce a zeroization of DataFort?
Upgrading from SAN 1.6 to SAN 2.2.0.1
Upgrading from SAN 1.8 to SAN 2.1
54) How can we verify a customer disaster recovery configuration ?
Be verified as functional before DataFort appliances are installed
55) Which methods allow movement of keys to two standalone LKM servers/appliances?
Establish link between LKM1 and LKM2 appliances
Register DataFort to both LKM servers and manually initiate a backup to LKM2
56) How do you ensure that operation trace logging is enabled?
Through the Logging Configuration page of the WebUI
By using the CLI sys prop get command to view the setting of the sys.proc.syslogd.conf.op_trace property
57) What happens when the DataFort detects an intrusion?
The DataFort stops encrypting and decrypting data.
58) When should you use DataFort cloning as a key-sharing best practice?
Global default pool deployment
59) Which Solaris-specific file is used for target discovery for tape devices?
/kernel/drv/st.conf
60) Which two FC DataFort deployment options are supported?
Both storage and host ports connected to the same switch
The storage port connected to an edge switch and the host port connected to a core switch
61) In FC-Series 2.x, which two applications recognize application pool labels?
Symantec/Veritas NetBackup
EMC/Legato NetWorker
62) Which are the backup types that are available in Netbackup?
Differential Incremental Backup
Cumulative Incremental Backup
User Backup
User Archive
Full Backup
63) What is media multiplexing?
The process of sending concurrent-multiple backups from one or more clients to a single storage device and interleaving those images onto the media

64) What is avrd ?
The automatic volume recognition daemon on UNIX and process on Windows
65) What is bpadm?
An administrator utility that runs on NetBackup UNIX servers. It has a character-based, menu interface that can be run from terminals that do not have X Windows capabilities
66) What is a frozen image ?
A stable disk copy of the data prior to backup. A frozen image is created very rapidly, causing minimal impact on other applications. There are two basic types: copy-on-write and mirror
67) What is retention period ?
The length of time that NetBackup keeps backup and archive images. The retention period is specified on the schedule.
68) What is tape spanning ?
Using more than one tape to store a single backup image
69) What is TLD,TLH,TLM,TL4,TL8?
TLD-Tape Library DLT
TLH-Tape Library Half-inch
TLM-Tape Library Multimedia
TL4-Tape Library 4MM
TL8-Tape Library 8MM
70) What is the full form of DLT?
Digital-linear tape or tape drive type
71) What is image expiration and volume expiration?
Image expiration is the date and time when NetBackup stops tracking a backup image. Volume Expiration is the date and time when the physical media (tape) is considered to be no longer usable
72) What is the command line to shut down NetBackup services on a Windows server?
bpdown
73) Which command is used to produce a report on Status of Netbackup Images?
bpimagelist
74) What is the use of command vmupdate?
Inventory the media contents of a robotic library
75) What is the command line for creating a copy of backups created by NetBackup?
bpduplicate
76) If you wanted to bypass netbackup commands and move a tape from slot 1 to drive 3 how would you do that?
robtest s1 d3
77) If I asked you to tell me whether a client has NetBackup on it just by using a telnet command, what would you do?
Use telnet to check that the bpcd port is listening on the client:
telnet <host name> 13782 (the bpcd port)
78) If you wanted to know what IP address netbackup was using to perform backups what command would you run and where would you run it?
Use bpclntcmd -pn from the master server to the client
bpcover -r <host name>
79) If media ID A04567 comes back frozen, what are the steps to unfreeze it and move it back to scratch from the command line?
bpmedia -unfreeze -m <media ID>
Through the GUI, right-click the media to move it to the scratch volume pool
80) What client versions are supported by NBU 6.x and 5.x masters?
NBU 6.x supports client versions back to 4.5
81) What is the process of importing images and why do we import images?
There are two phases.
Phase I: It builds the skeleton metadata
Phase II: It lets you select which images to actually complete the import process
82) There are 1000 client machines; 999 are transferring data at good speed, but one client is taking too long. The backup should complete within 2 hours, but even after 12 hours the data transfer is still happening. Why?
Check the network bandwidth, and review the bptm and bpbrm logs on both the master and media servers along with the bpbkar and bpcd logs on the client.
83) There is a tape library with 10 drives. Can we create 2 storage units?
Yes, any number of storage units can be created. But the main criteria for creating a storage unit are:
1: It should be the same robot controller
2: It should be the same library
3: It should be the same media server connected
4: It should be the same density

SQL.BSQ Script and SQL Server Interview Questions

1) What is the use of the SQL.BSQ script?

The SQL.BSQ script is run to create the base dictionary tables and roles at database creation time in an Oracle database.
The sql.bsq script creates the following roles in the database - connect, resource, dba, select_catalog_role, delete_catalog_role, execute_catalog_role
A role is a grouping of privileges that can be granted to users and restricts access at the database level
Because privileges are grouped, they can be granted and revoked simultaneously
2) In SQL Server, how will you export logins to a report (RPT) file?
SQL Server offers the useful ability to export all of its logins to a report file, usually stored in .rpt format. This is an important task to complete before installing a SQL Server update or upgrading SQL Server as a whole. This proactive measure preserves logins in case of an upgrade failure (updates mostly have little impact on logins, but this can't be discounted)
Perform the following steps to export logins to a report file
1) First, find details on the logins in the server as follows
select * from syslogins
2) Now highlight the above SQL, right-click, and choose the option that says "Results to File". Click Execute
3) Save the report file, which is in .rpt format
To get results back in SSMS, choose "Results to Grid"
3) What are the different startup options in SQL Server?
Startup options are values written to the system registry when SQL Server software is installed using the Microsoft SQL Server installer. These parameters can be modified using SQL Server Configuration Manager
Changes to startup options require a service restart
Startup options resemble init.ora/spfile.ora parameters in Oracle and session variables in MySQL databases
To access SQL Server Configuration Manager go to :
Start -> Microsoft SQL Server 2016 -> SQL Server Configuration Manager
4) Why does a SQL Server differential restore fail with an LSN missing error?
SQL Server comes with an interesting differential backup and restore option: with a full backup, followed by a differential backup and transaction logs, we can perform restore and recovery
Differential backups are cumulative (i.e. created from the last full backup) and hence consume space. Design the backup destination for differential backups accordingly
Using SQL Server Management Studio (SSMS), we can hit an LSN-chain-breaking problem on a differential restore following a full restore. This is the result of a bug in the interface. Try performing the differential restore from a query window instead, and it works fine
5) How do you enable FILESTREAM at the instance level using T-SQL in SQL Server?
By default, the FILESTREAM feature is not enabled upon installation or upgrade of SQL Server. FILESTREAM is needed if we wish to make use of interesting SQL Server features like FileTables. Here is the simple step to enable FILESTREAM access using T-SQL
exec sp_configure 'filestream_access_level', 2
reconfigure
go
sp_configure modifies the value and enables FILESTREAM access. The output will show that the filestream_access_level parameter value has been changed from the original 0 to the new value of 2
6) Are shared locks compatible with exclusive locks?
No. Exclusive locks are used with DML, whereas shared locks are taken by read-only operations like SELECT; a shared lock on a resource blocks an exclusive lock on the same resource
7) What is the syntax of a query? Which statement is used to query the DB?
select [column|* for all columns] from tablename;
8) How do you get details on a pattern in a SQL Server DB?
The simple function PATINDEX can be used
PATINDEX('pattern','string') provides details on the occurrence of the pattern in the given string
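For example, PATINDEX takes a wildcard pattern and returns the 1-based starting position of the first match (0 if no match). A small illustrative query:

```sql
SELECT PATINDEX('%ter%', 'interview');
-- returns 3: 'ter' starts at the third character
```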
9) What functions are used for case conversion in SQL Server?
lower() - converts the given data to lower case
upper() - converts the given data to upper case
10) How do you concatenate two or more strings in SQL Server?
SQL Server comes with a function, concat(), that makes this easy
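Small illustrative queries for the case-conversion and concatenation functions above (the literals are arbitrary examples):

```sql
SELECT LOWER('SQL Server');          -- 'sql server'
SELECT UPPER('sql server');          -- 'SQL SERVER'
SELECT CONCAT('SQL', ' ', 'Server'); -- 'SQL Server'
```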
11) What happens when ANSI_NULLS is set to ON?
A SELECT statement with a WHERE condition of column = NULL returns zero rows, even if there are NULL values in that column. To change this behavior, use
SET ANSI_NULLS OFF
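A sketch of the behavior; the table and column names are hypothetical:

```sql
SET ANSI_NULLS ON;
SELECT * FROM employees WHERE manager_id = NULL;  -- returns zero rows
SELECT * FROM employees WHERE manager_id IS NULL; -- returns the NULL rows

SET ANSI_NULLS OFF;
SELECT * FROM employees WHERE manager_id = NULL;  -- now matches NULL rows
```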
12) How is CHARINDEX() used?
This function searches an expression (a string) for another string and returns its starting position
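An illustrative query (the literals are arbitrary examples):

```sql
SELECT CHARINDEX('Server', 'SQL Server 2016');
-- returns 5: 'Server' starts at the fifth character
```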
13) What is the basic difference between DELETE and TRUNCATE?
DELETE is DML and TRUNCATE is DDL; TRUNCATE executes fast because it is irreversible and doesn't log information on every row deleted
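Side by side, with a hypothetical table:

```sql
DELETE FROM employees WHERE dept = 'HR'; -- DML: row-by-row, fully logged, supports WHERE
TRUNCATE TABLE employees;                -- DDL: deallocates pages, no WHERE clause, faster
```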
14) Give details on some SQL Server ranking functions
rank(), dense_rank(), row_number(), and ntile(), used with an over() clause, are good examples
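The ranking functions above might be used together like this; the table and column names are hypothetical:

```sql
SELECT name, salary,
       ROW_NUMBER() OVER (ORDER BY salary DESC) AS row_num,
       RANK()       OVER (ORDER BY salary DESC) AS rnk,
       DENSE_RANK() OVER (ORDER BY salary DESC) AS dense_rnk,
       NTILE(4)     OVER (ORDER BY salary DESC) AS quartile
FROM employees;
```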
15) What happens during creation of a unique constraint?
A unique index is created by default when a unique constraint is created. This prevents duplicate keys. The same is the case with primary key creation
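A minimal sketch; the table, column, and constraint names are made up:

```sql
ALTER TABLE employees
ADD CONSTRAINT uq_employees_email UNIQUE (email);
-- SQL Server automatically creates a unique index named uq_employees_email
```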
16) What is SQL Server piecemeal restore?
This is an interesting restore and recovery option that allows quick availability of the DB
How does piecemeal restore expedite DB availability?
Piecemeal restore starts with an initial restore of the important filegroups, followed by a restore of the secondary filegroups later on
We can use the WITH PARTIAL option to perform a piecemeal restore
Query the sys.master_files catalog view to get more details during a piecemeal restore
select name,state_desc,create_lsn,redo_start_lsn from sys.master_files where database_id=db_id('DBNAME')
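A minimal sketch of a piecemeal restore sequence under the simple recovery model; the database, filegroup, and backup file names are hypothetical:

```sql
-- Restore the primary filegroup first and bring the database online
RESTORE DATABASE SalesDB FILEGROUP = 'PRIMARY'
FROM DISK = 'D:\backup\SalesDB_full.bak'
WITH PARTIAL, RECOVERY;

-- Secondary filegroups can be restored later while the DB is in use
RESTORE DATABASE SalesDB FILEGROUP = 'Archive'
FROM DISK = 'D:\backup\SalesDB_full.bak'
WITH RECOVERY;
```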

Walgreens prescription history for taxes

Now it is possible to access your prescription records as well as your past payment history online from Walgreens. This is an interesting feature offered for free by Walgreens Pharmacy

Why do I need to access these records?

You can access details on prescriptions and payment history that can be used for filing your taxes during the tax season

If you own an HSA (Health Savings Account), these records come in handy if there is an audit. This proof can be used to validate payments made toward medicine purchases from Walgreens

FSA (Flexible Spending Account) reimbursements require prescriptions, which can be accessed from the portal

If you would like to share details on your health with family and friends, this comes in handy

How do I access my prescription records and payment history?

Navigate to the website of <a href="https://www.walgreens.com/login.jsp" target="_blank" rel="noopener noreferrer">Walgreens Pharmacy</a>

Register for free and access the details from within the website


Storage Technology Interview Question Answer

 1) Which remote replication method is most affected by the distance between source and target?

Synchronous
2) Which component in an EMC Symmetrix Array provides physical connectivity to hosts?
Front-end adapter
3) You have five disks in a RAID 0 set, and have assigned a stripe depth of 128 blocks (64 kB).What is the STRIPE size (kB)?
320
4) A five disk set is configured as RAID 5. Each disk has a Mean Time Between Failure (MTBF) of 500,000 hours. What is the MTBF of the set?
100,000 hours
5) What are the three key data center management activities that are interdependent?
Provisioning, Monitoring, Reporting
6) Data is being replicated from site A to site B using disk buffering to create extended distance consistent point in time copies every hour. In the event of a site failure at A, what is the maximum amount of data that will be missing at site B?
2 hours
7) What is the most likely cause when there are no devices available to the HOST even though the HBAs are logged into the FA?
Device masking (symmask) has not been configured
8) What is a key requirement of Content Addressed Storage?
Longevity
9) What are Virtual DataMovers [VDMs]?
They are dedicated CIFS servers
10) In a Centera, what is the minimum node configuration of Generation 4 Hardware?
4
11) What is use of WINS?
Resolves NetBIOS names to IP addresses
12) Give details about Celerra SRDF/A :-
It is implemented over extended distances, is bi-directional, and supports an automatic failover and failback process
13) In a Centera, how many internal switches are in a two cube Generation 4 cabinet?
4
14) Which IP address is used to discover Domain ID 1 from SAN Manager?
192.168.24.28
15) What would resolve a conflict where the physical Switch/Director recognized a port login, while SAN Manager does not report it as logged in?
Discover the switch
16) What is default behavior of B-Series Switch/Director?
Switch acts like a HUB. All ports have access to each other
17) What is the command for modifying the Domain ID of a B_Series Switch/Director?
configure
18) What is the CLI command for backing up a B-series Switch/Director configuration?
configupload
19) Which parameters must be the same in switches merged into one fabric?
R_A_TOV, E_D_TOV
20) What occurs when zoning is disabled in a B-Series fabric?
All ports see each other
21) How many faulty readings must occur for B-Series Internet explorer view to show a critical status?
3
22) Which tool can be used to manage zoning in a B-Series Fabric?
Fabric Manager
23) Which view in B-Series Web Tool provides the fastest way to determine if an HBA has performed A fabric login?
Name Server
24) Which HP-UX command can be used to discover newly added LUNs?
ioscan -fnC disk
25) Which file extension is used when saving a data collection from M-series Switch/Directors?
zip
26) Give details on ControlCenter Storage pools:-
Groups of storage devices that SPS will search for available storage
27) What are valid Fabric topologies?
Mesh, Core-Edge
28) What is a zoneset?
A collection of zones
29) Which EMC Symmetrix connectivity options cannot be used with mainframe hosts?
Fibre Channel, SCSI
30) Which EMC Symmetrix connectivity options can be used with mainframe hosts?
FICON,ESCON
31) What are ControlCenter Storage policies in EMC?
Rules containing all of the general criteria for storage allocation requests
32) Which ControlCenter application can you use to identify currently allocated and utilized storage?
StorageScope
33) What is the purpose of the ControlCenter Console?
It allows environment management
34) How are disks accessed by UNIX hosts?
Using device special files
35) Give details on iSCSI naming conventions :-
IQN, EUI
36) How does EMC TimeFinder/Snap save disk space while providing full access to the original data on the snapshot copy?
It does so by creating pointers to the original data
37) What is a feature of both EMC TimeFinder/Mirror and TimeFinder/Mirror Clone?
Identical storage capacity as the Source device
38) Does EMC TimeFinder/Mirror Clone use copy on access?
Yes
39) What does the Visual Storage view in the EMC ControlCenter Console display?
EMC ControlCenter Console displays Front End Directors, Disk Directors, and all the devices for a Symmetrix array
40) When is a data accessible on snap copy in EMC TimeFinder/Snap?
Data is accessible on the Snap copy immediately after the Snap session is initiated
41) What is the difference between EMC NSXXXG and the NSXXX?
NSXXXG allows the sharing of back-end storage with other hosts
42) How does EMC TimeFinder/Snap save disk space while providing full access to the original
data on the snapshot copy?
It creates pointers to the original data
43) How is free space in the EMC CLARiiON write cache maintained?
The cache is flushed to the drives during I/O bursts and only the least-recently used pages are flushed
44) Which of the following recovery methods is used after a EMC Data Mover failover?
Manual recovery
45) What does NDMP represent ?
Network Data Management Protocol
46) Is the write intent log mandatory when using EMC Synchronous MirrorView?
No. It is optional
47) Which protocol uses TCP/IP to tunnel Fibre Channel frames?
FCIP
48) What is the correct sequence for starting a ControlCenter component?
Repository, Server, Store
49) Which EMC Symmetrix system uses a two bus architecture that is referred to as an "X" and a "Y" bus?
Symmetrix 4.8
50) What does EMC SRDF/Automated Replication provide?
SRDF/Automated Replication provides a copy of the data on the Target device, which will be several minutes to hours behind the Source
51) Which series of switches can you use EMC Connectrix Manager to manage?
M-Series
52) What is necessary for using EMC ControlCenter automated storage provisioning?
Storage pools and storage policies
53) What is characteristic of Network Attached Storage (NAS)?
Clients perform file level I/O to access data on a NAS device
54) Which load balancing policy available in PowerPath relates to path failover only?
Request
55) In ControlCenter, what does the EMC TimeFinder/SRDF QoS task bar do?
Performance Management

WooCommerce interview questions and answers

 WooCommerce, the most popular shopping-cart software, is a free option that can be installed easily as a plugin on WordPress, the most popular content management system. We will discuss WooCommerce interview questions in more detail:

1) What is woocommerce?
If you want to go online and create an e-commerce website in minutes, install WordPress and make use of an interesting plugin called WooCommerce. WooCommerce is the most popular software for implementing a shopping cart, creating a catalogue of products, building an online shopping mall, and much more. If you want to convert a website into an e-commerce retail shop easily and instantly, make use of the WooCommerce plugin
2) Can you install and use WooCommerce directly?
No. WooCommerce is not standalone software; it is installed as a plugin on top of a WordPress installation
3) Do you need to pay money for the WooCommerce plugin?
No. Install WordPress in your hosting account, navigate to Plugins, and search for the WooCommerce plugin. Install it for free, activate it, then configure and save its settings
4) What features are added to your website after installing the WooCommerce plugin?
The website instantly becomes an e-commerce website. A shop is added that offers a unique opportunity to add products, categorize them, set sale prices, specify the product type (simple, variable, downloadable, and so on), set product attributes, and much more. As the website becomes an online shopping mall, an account page, checkout page, and a cart to add and check out products are all added automatically
5) How is a product added to a WooCommerce shop?
Log in to the WordPress administrator dashboard. On the left-hand side, a Products menu appears below the WooCommerce menu. Click Products, add a new product, set its attributes, choose the product type (simple, variable, virtual, downloadable), set the product price and sale price if different, assign a product category on the right-hand side the same way you do for regular WordPress posts and pages, and publish the product
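If WP-CLI with the WooCommerce CLI commands is available on the host, the same product creation can be done from the shell. A sketch — the product name, prices, and the admin username are placeholders, and the wc commands require an authenticated --user:

```shell
# Create a simple product through the WooCommerce CLI (REST-backed)
wp wc product create \
  --name="Sample T-Shirt" \
  --type=simple \
  --regular_price=19.99 \
  --sale_price=14.99 \
  --user=admin
```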
6) Where can you track orders in your WooCommerce shop?
Log in to the WordPress administrator dashboard. The WooCommerce menu appears on the left-hand side; click it and choose the Orders option. You can see the number of open orders, if any. If no order is present, check the details of old orders. Orders can be in many different statuses, such as processing, completed, or on hold
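The same order information can be pulled from the shell if WP-CLI with the WooCommerce CLI is installed. A sketch — the order ID and admin username are placeholders:

```shell
# List orders that are still in the "processing" status
wp wc shop_order list --status=processing --user=admin
# Inspect a single order by its ID
wp wc shop_order get 123 --user=admin
```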
7) In your WooCommerce implementation you get an error that says there are no payment methods for the state. How will you start your investigation?
Typically this is the result of a security implementation in place, such as an SSL certificate. Another possibility is a misconfiguration of the payment methods, currency options, etc.
8) What does the WooCommerce error "this order cannot be paid for" mean?
As the error indicates, this issue crops up if no products have been added to the order, or if no product priced above $0 has been added to the order
9) What causes the error: WooCommerce This order’s status is “Pending payment”—it cannot be paid for. Please contact us if you need assistance.?
I'm in the process of creating an e-commerce site for one of our client projects. This customer has WooCommerce as their e-commerce solution in place. I'm testing the invoice portion of the project. As expected, the invoice is mailed to the customer without any issues. This client requires a PDF invoice attached to the email, which is also working fine.
The payment gateway used for testing is PayPal, which is properly configured, so there is no problem with PayPal either.
Once the pay link is clicked from within test email, I got the following error:
This order’s status is “Pending payment”—it cannot be paid for. Please contact us if you need assistance.
Interestingly, I found that this error is the result of one of the following issues:
1) No product has been added to the order. As a result, the invoice amount is $0
2) Products are added to the order, but their prices are not set to a value greater than $0. Again, this leads to an order total of $0
I added a product, updated the order, and resent the invoice to the test email to simulate the customer's situation. As expected, this fixed the issue and I was able to test PayPal payment from the invoice checkout

Reason behind hadoop error could not find or load main class fs

 As a first step in learning Hadoop, I'm currently reading the Hortonworks tutorial on mirroring datasets between Hadoop clusters with Apache Falcon. I stumbled on the error: could not find or load main class fs

As a first step, I tried logging in as the falcon user
su - falcon
Now, I'm trying to create a directory using the hdfs command as provided in the tutorial:
hadoop fs -mkdir /apps/falcon/primaryCluster
Error: could not find or load main class fs
As a next step, I tried setting HADOOP_PREFIX as follows
export HADOOP_PREFIX=/usr/hdp/current/hadoop-client
This did not fix the issue either
As a next step, instead of fs I tried using dfs
hadoop dfs -mkdir /apps/falcon/primaryCluster
This worked fine
To confirm that the folder was created, I issued the following command, which also worked fine
hdfs dfs -ls /apps/falcon
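When the hadoop wrapper cannot find a subcommand's class, a few quick checks help confirm whether the client environment itself is sane. A minimal sketch — the install path below is an HDP-style example, not taken from the tutorial:

```shell
# Confirm which hadoop launcher is on the PATH
which hadoop
# Print the client version and the classpath the wrapper builds
hadoop version
hadoop classpath
# If the classpath is empty or wrong, point the client at the install
export HADOOP_PREFIX=/usr/hdp/current/hadoop-client
export PATH=$HADOOP_PREFIX/bin:$PATH
```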
1) What are all the datasources from which we can load data into Hadoop?
Hadoop is an open source framework supporting distributed storage and processing of big data. The first step is to pump data into Hadoop. Datasources can come in many different forms, as follows:
1) Traditional relational databases like oracle
2) Data warehouses
3) Middle tier, including web servers and application servers - server logs form a major source of information
4) Database logs
2) What tools are mainly used in data load scenarios?
Hadoop supports data loads into and out of Hadoop from one or more of the above-mentioned datasources. Tools such as Sqoop and Flume are used in data load scenarios. If you are from an Oracle background, think of tools like Data Pump and SQL*Loader that help with data loads; though not exactly the same, they are logically similar
3) What is a load scenario?
Big data loaded into Hadoop can come from many different datasources. Depending on the datasource origin there are many different load scenarios, as follows:
1) Data at rest - Normal information stored in files, directories, and sub-directories is considered data at rest. These files are not intended to be modified any further. To load such information, HDFS shell commands like cp, copyFromLocal, and put can be used
2) Data in motion - Also called streaming data. This is a type of data that is continuously being updated; new information keeps getting added to the datasource. Logs from web servers like Apache, application server logs, and database server logs (say alert.log in the case of an Oracle database) are all examples of data in motion. It is to be noted that multiple logs need to be merged before being uploaded onto Hadoop
3) Data from web servers - Web server logs
4) Data from data warehouses - Data should be exported from traditional warehouses and imported onto Hadoop. Tools like sqoop, bigsql load, and jaql netezza can be used for this purpose
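For the data-at-rest scenario above, a typical load with the HDFS shell looks like the following sketch (all local and HDFS paths are example placeholders):

```shell
# Create a target directory in HDFS and load a local file into it
hdfs dfs -mkdir -p /data/atrest/logs
hdfs dfs -put /var/log/httpd/access_log /data/atrest/logs/
# copyFromLocal is equivalent to put for files on the local filesystem
hdfs dfs -copyFromLocal /tmp/archive.csv /data/atrest/
# Verify the uploads
hdfs dfs -ls /data/atrest
```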
4) How does sqoop connect to relational databases?
Information stored in relational DBMSs like Oracle, MySQL, SQL Server, etc. can be loaded into Hadoop using sqoop. As with any load tool, sqoop needs some parameters to connect to the RDBMS, pull information, and upload the data into Hadoop. Typically these include:
4.1) username/password
4.2) connector - a database-specific JDBC driver needed to connect to the different databases
4.3) target-dir - the name of the directory in HDFS into which information is loaded as a CSV file
4.4) WHERE - a subset of rows from a table can be exported and loaded using a WHERE clause
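Putting those parameters together, a typical sqoop invocation might look like the sketch below. The hostname, database, table, credentials, and target directory are all placeholders:

```shell
# Import one table from MySQL into HDFS as CSV files
# (-P prompts for the password instead of putting it on the command line)
sqoop import \
  --connect jdbc:mysql://dbhost:3306/sales \
  --username loader \
  -P \
  --table orders \
  --where "order_date >= '2020-01-01'" \
  --target-dir /data/sales/orders
```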

Storage Area Network SAN interview questions

 How is a SAN different from an ethernet network?

Both SANs and Ethernet networks use the same set of devices, including switches and cables, for connectivity. An Ethernet network uses the TCP/IP protocol; a SAN uses Fibre Channel for data transmission. Protocols popularly used by SAN switches are FCP (Fibre Channel Protocol), SCSI (Small Computer System Interface), and iSCSI (Internet SCSI - SCSI over TCP/IP)
What is a SAN?
SAN stands for Storage Area Network. It refers to a set of storage devices attached to a LAN or an Ethernet network. A network dedicated to storage is referred to as a SAN
SAN Storage Area Network Components :
What are the major grouping of SAN(storage area network) components?
SAN(storage area network) components can be categorized under following three major categories:
1) Host Components
2) Fabric Components
3) Storage Components
Give details on host components:-
The host components of a SAN consist of the servers themselves and the components that enable the servers to be physically connected to the SAN:
1. Host bus adapters (HBAs) are located in the servers, along with a component that performs digital-to-optical signal conversion. Each host connects to the fabric ports from its HBA.
2. Cables connect the HBAs in the servers to the ports of the SAN fabric.
3. HBA drivers run on the servers to enable a server’s operating system to communicate with the HBA.
Give details on Fabric components:-
All hosts connect to the storage devices on the SAN through the fabric of the SAN. The actual network portion of the SAN is formed by the fabric components. The fabric components of the SAN can include any or all of the following:
1. Data Routers
2. SAN Hubs
3. SAN Switches
4. Cables
Give details on data routers:-
Data routers provide intelligent bridges between the Fibre Channel devices in the SAN and the SCSI devices. Specifically, servers in the SAN can access SCSI disk or tape devices in the SAN through the data routers in the fabric layer.
What are SAN Hubs?
They are precursors to today’s SAN switches. A SAN hub connects Fibre Channel devices in a loop (called a Fibre Channel Arbitrated Loop, or FC-AL). Although some current SANs may still be based on fabrics formed by hubs, the most common use today for SAN hubs is for sharing tape devices, with SAN switches taking over the job of sharing disk arrays.
What are SAN Switches?
SAN switches are at the heart of most SANs. SAN Switches can connect both servers and storage devices, and thus provide the connection points for the fabric of the SAN.
What are modular switches?
For smaller SANs, the standard SAN switches are called modular switches and can typically support 8 or 16 ports (though some 32-port modular switches are beginning to emerge). Sometimes modular switches are interconnected to create a fault-tolerant fabric.
What are director-class switches?
For larger SAN fabrics, director-class switches provide a larger port capacity (64 to 128 ports per switch) and built-in fault tolerance.
How is a SAN topology defined?
The type of SAN switch, its design features, and its port capacity all contribute to its overall capacity, performance, and fault tolerance. The number of switches, types of switches, and manner in which the switches are interconnected define the topology of the fabric.
What are Cables?
SAN cables are special fiber optic cables that are used to connect all of the fabric components. The type of SAN cable and the fiber optic signal determine the maximum distances between SAN components, and contribute to the total bandwidth rating of the SAN.
Give details on Storage components:-
The storage components of the SAN are the disk storage arrays and the tape storage devices. Storage arrays (groups of multiple disk devices) are the typical SAN disk storage device. They can vary greatly in design, capacity, performance, and other features. Tape storage devices form the backbone of the SAN backup capabilities and processes. Smaller SANs may just use high-capacity tape drives. These tape drives vary in their transfer rates and storage capacities. A high-capacity tape drive may exist as a stand-alone drive, or it may be part of a tape library. A tape library consolidates one or more tape drives into a single enclosure.
Tapes can be inserted and removed from the tape drives in the library automatically with a robotic arm. Many tape libraries offer very large storage capacities—sometimes into the petabyte (PB) range. Typically, large SANs, or SANs with critical backup requirements, configure one or more tape libraries into their SAN.
Storage Area Network Port Naming / SAN Port Naming:
What is a SAN(storage area network) port?
The points of connection from devices to the various SAN components are called SAN ports. All ports in a SAN are Fibre Channel ports.
What is a fabric port?
Fabric ports are the SAN ports that serve as connection points to the switches, hubs, or routers that comprise the fabric of the SAN.
What is a node?How many ports will a node have?
Each component in a SAN — each host, storage device, and fabric component (hub,router, or switch) — is called a node, and each node may have one or more ports defined for it.
What is port naming?
Port naming is a convention to identify ports. Ports can be identified in a number of ways:
1. PORT_ID
2. WWPN
3. Porttype_Portmode
What is Port_ID?
Within the SAN, each port has a unique Port_ID that serves as the Fibre Channel address for the port. This enables routing of data through the SAN to that port.
What is a WWPN?
A unique World Wide Port Name (WWPN) identifies each port in a SAN. The WWPN is a globally unique identifier for the port that allows certain applications to access it from outside the SAN.
What is the Porttype_Portmode naming convention?
It is a port naming convention in which the port name consists of the type of port (that is, the type of SAN component on which the port is physically located) and how the port is used (its logical operating mode). Using this convention, the port's name can change as the port goes in and out of use on the SAN.
Give example of porttype_portmode:-
An unused port on a SAN Fibre Channel switch is initially referred to as a G_Port. If a host server is plugged into it, the port becomes a port into the fabric, so it becomes an F_Port. However, if the port is used instead to connect the switch to another switch (an inter-switch link), it becomes an E_Port.
Basic Layers in a SAN(Storage Area Network)/Basic SAN components are as follows :
1) Host layer
2) Switch layer/fabric layer
3) Storage layer
How do different components communicate in a SAN(storage area network)?
1) Server communicates with the switch and searches for a drive to access data from
2) Switch sends an acknowledgement to server and provides information on port used to access the disk drive. Switch also sends the protocol to be used for subsequent communication. It could be SCSI(Small Computer System Interface) or Fiber Channel Protocol
3) Now the server sends information to SCSI drives in the port specified by switch
4) SCSI drives communicate with the physical disk drives and they access the information
5) Now the server receives the information it needs
The system administrator needs to see the list of users who have logged in and out on their
machines and the times when they did so. What should they do?
The last command searches the /var/log/wtmp file and displays who has logged on to the machine and the time
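A few common invocations of last can be sketched as follows (the username below is an example):

```shell
# Show recent logins and logouts recorded in /var/log/wtmp
last
# Restrict the output to one user, or to one terminal
last alice
last tty1
# Read from an explicit wtmp file (e.g. a rotated copy)
last -f /var/log/wtmp
# Show bad login attempts (reads /var/log/btmp, if maintained)
lastb
```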
What is the most likely cause of path-failure handling problems on a Solaris server with PowerPath?
The host HBA TimeOutValue being set to 0 causes the failure
What happens during zone failure?
The host cannot see the FA but the node list shows the HBA and FA logged into the switch
What is the procedure to copy the active zone set from SAN Manager?
A copy is automatically created in the planned zone sets, and nothing more needs to be done
Storage System NAS SAN DAS Architectures:
This defines how servers are connected to the storage array units. The popular storage architecture variants are DAS, NAS, SAN, and iSCSI
DAS (Direct Attached Storage) – Name given to storage system that is directly attached to the host system. This is the most common storage architecture
NAS (Network Attached Storage) - NAS devices are specialized file servers optimized for serving storage, using a routable protocol (TCP/IP) over a LAN
SAN (Storage Area Network) is a specialized network, a communication infrastructure that provides physical connections and a management layer, access to high performance and highly available storage subsystems using block storage protocols. SAN is made of specific devices, such as HBA (Host Bus Adapters) in the host server, front-end adapters that reside in storage array
iSCSI – Internet Small Computer System Interface is a storage protocol based on TCP/IP. It encapsulates SCSI commands into TCP/IP packets and delivers them reliably over IP networks
What does Rule of 16 say?
The Rule of 16 says that if we have 16 or fewer servers, a SAN (Storage Area Network) is an expensive solution.
In such cases, NAS (Network Attached Storage) and iSCSI-based solutions will be more beneficial.
What is a SAN ?
A storage area network (SAN) is a specialized high-speed network of storage devices and computer systems (also referred to as servers, hosts, or host servers).
Currently, most SANs use the Fibre Channel protocol.
A storage area network presents shared pools of storage devices to multiple servers.
Each server can access the storage as if it were directly attached to that server. The SAN makes it possible to move data between various storage devices, share data between multiple servers, and back up and restore data rapidly and efficiently.
In addition, a properly configured SAN provides robust security, which facilitates both disaster recovery and business continuance.
Components of a SAN can be grouped closely together in a single room or connected over long distances.
This makes SAN a feasible solution for businesses of any size: the SAN can grow easily with the business it supports.
What are the components of a SAN (storage area network)? / List the SAN (storage area network) components:-
A SAN (storage area network) is made of the following components:
1. SAN Switches
2. Fabric
3. Connections: HBAs (Host Bus Adapters) and Controllers
SAN Switches :
Specialized switches called SAN switches are at the heart of the typical SAN. Switches provide capabilities to match the number of host SAN connections to the number of connections provided by the storage array. Switches also provide path redundancy in the event of a path failure from host server to switch or from storage array to switch.
Fabric :
When one or more SAN switches are connected, a fabric is created.
The fabric is the actual network portion of the SAN. A special communications
protocol called Fibre Channel (FC) is used to communicate over the entire network. Multiple fabrics may be interconnected in a single SAN, and even a simple SAN is often composed of two fabrics for redundancy.
Connections: HBA and Controllers :
Host servers and storage systems are connected to the SAN fabric through ports in the fabric. A host connects to a fabric port through a Host Bus Adapter (HBA), and the storage devices connect to fabric ports through their controllers.
Are servers homogenous in SAN environment?
No. Servers need not be homogeneous in a SAN environment. Each server may host numerous applications that require dedicated storage for application processing.
How does a SAN(storage area network) work?
The SAN (storage area network) components interact as follows:
1. When a host wishes to access a storage device on the SAN, it sends out a block-based access request for the storage device.
2. The request is accepted by the HBA for that host and is converted from its binary data form to the optical form required for transmission on the fiber optic cable.
3. At the same time, the request is “packaged” according to the rules of the Fibre Channel protocol.
4. The HBA transmits the request to the SAN.
5. Depending on which port is used by the HBA to connect to the fabric, one of the SAN switches receives the request and checks which storage device the host wants to access. From the host perspective, this appears to be a specific disk, but it is actually just a logical device that corresponds to some physical device on the SAN. It is up to the switch to determine which physical device has been made available to the host for its targeted logical device.
6. Once the switch has determined the appropriate physical device, it passes the request to the appropriate storage device.
What is the Fibre Channel Protocol (FCP)?
Fibre Channel Protocol (FCP) is a transport protocol (similar to TCP used in IP networks) which predominantly transports SCSI (Small Computer System Interface) commands over Fibre Channel networks.
What are the various layers of fibre channel protocol/fibre channel?
Fibre Channel is a layered protocol. It consists of 5 layers, namely:
1. FC0 The physical layer, which includes cables, fiber optics, connectors, pinouts, etc.
2. FC1 The data link layer, which implements the 8b/10b encoding and decoding of signals.
3. FC2 The network layer, defined by the FC-PI-2 standard, consists of the core of Fibre Channel, and defines the main protocols.
4. FC3 The common services layer, a thin layer that could eventually implement functions like encryption or RAID.
5. FC4 The Protocol Mapping layer. Layer in which other protocols, such as SCSI, are encapsulated into an information unit for delivery to FC2.
What is FC-PH?
FC0, FC1, and FC2 are also known as FC-PH, the physical layers of Fibre Channel.
Where does a Fibre Channel router operate?
Fibre Channel routers operate up to the FC4 level (i.e. they are in fact SCSI routers), switches up to FC2, and hubs on FC0 only.
What are the various speeds at which Fibre Channel products are available?
Fibre Channel products are available at 1 Gbit/s, 2 Gbit/s, 4 Gbit/s, 8 Gbit/s, 10 Gbit/s and 20 Gbit/s. Products based on the 1, 2, 4 and 8 Gbit/s standards should be interoperable, and backward compatible. The 10 Gbit/s standard (and 20 Gbit/s derivative), however, is not backward compatible with any of the slower speed devices, as it differs considerably on FC1 level (64b/66b encoding instead of 8b/10b encoding). 10Gb and 20Gb Fibre Channel is primarily deployed as a high-speed "stacking" interconnect to link multiple switches.
What is an iFCP?
iFCP (Internet Fibre Channel Protocol) is an emerging standard for extending Fibre channel storage networks across the Internet. iFCP provides a means of passing data to and from Fibre Channel storage devices in a local storage area network (SAN) or on the Internet using TCP/IP. TCP provides congestion control as well as error detection and recovery services. iFCP merges existing SCSI and Fibre Channel networks into the Internet. iFCP can either replace or be used in conjunction with existing Fibre Channel protocols, such as FCIP(Fibre Channel over IP).
Why is iFCP superior to FCIP?
iFCP addresses some problems that FCIP does not. For example, FCIP is a tunneling protocol that simply encapsulates Fibre Channel data and forwards it over a TCP/IP network as an extension of the existing Fibre Channel network. However, FCIP is only equipped to work within the Fibre Channel environment, while the storage industry trend is increasingly towards the Internet-based storage area network. Because iFCP gateways can either replace or complement existing Fibre Channel fabrics, iFCP can be used to facilitate migration from a Fibre Channel SAN to an IP SAN or a hybrid network.
What is SAN island?
A SAN (Storage Area Network) island is a storage network that acts as an independent unit. It can be compared to a LAN, where PCs are networked as a single integrated unit. A WAN or Wide Area Network is formed by connecting two or more LANs. A SWAN is formed by combining LAN+SAN (using the same components for both purposes).
Basic architecture of a SAN: a storage array (a set of disks) connected via a network fabric (switches) to end clients.
When a SAN acts as an independent functional unit serving a specific purpose, it becomes a SAN island. It is common in IT industries to have separate infrastructure for mainframe systems and open-systems platforms (UNIX, Linux, NT/Windows, Windows servers); in such cases SAN islands can be useful. SAN islands can also serve different IT departments/business units within an organization that function as separate entities (virtual separation). SANs can also be used to share data across systems, which is very useful in the case of disaster recovery

UNIX system administrator day to day job duties

 Have you ever wondered what is expected of a UNIX (or Linux) system administrator, as the role is variously called, on a day-to-day basis? Here are the job duties and responsibilities of a system administrator across UNIX, Linux, AIX, and all other UNIX/Linux flavors:

1) Apply systems analysis techniques to determine functional specifications that meet system and networking business needs - Typically, the UNIX/Linux admin will be part of many infrastructure meetings, starting with capacity planning, where project needs are discussed in detail. This is the functional-specification gathering point that they need to translate into server needs. Some firms use capacity planning software, which can be a spreadsheet developed in-house or a third-party application to which the Linux admins are given appropriate access. They need to input the server specification needs starting from the project initiation phase, including requirements on the size and type of servers, the operating systems to be installed on them, the disk configurations to be made, etc.
As per the latest trend, all the major firms are evaluating the possibility of deploying their servers on cloud infrastructures like Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, Rackspace, Alibaba Cloud, and Oracle Cloud Infrastructure (popularly called OCI) as a cost-cutting measure. Infrastructure design and deployment is then done in the form of Infrastructure as a Service (IaaS) rather than the traditional on-site datacenter model. Hence, Linux admins can start equipping themselves by preparing for the AWS Certified Solutions Architect - Associate certification exam, with a SysOps specialization focusing on AWS system administration, without any delay
2) Design, develop, document, create, test, and modify system programs to meet enterprise needs
3) Highly skilled and proficient in theoretical and practical application of highly specialized information to server
4) Establish and manage server and monitoring infrastructure - monitoring tools like HP Glance and Linux command-line programs like sar, top, cpustat, vmstat, and iostat are used extensively on a day-to-day basis
5) Configure UNIX servers, which can be Linux flavors like RHEL, Oracle Linux, and Ubuntu Linux, or HP-UX, Solaris, etc.
6) Configure, build, and deploy applications and patches to the servers - This is a major job that takes much of the admin's time. Now, with cloud infrastructure like AWS in place, this can be off-loaded to the AWS team
7) Monitor server resources such as CPU, I/O, and disk to understand current resource requirements and anticipate growth needs, using server tools like top, sar, vmstat, cpustat, and memstat
8) Must have command of basic and advanced TCP/IP networking concepts - Though the system admin does not need to know a lot of networking, basic commands like ifconfig, traceroute, and ping come in handy when a server is inaccessible. The same applies in cloud infrastructure as well
9) Troubleshoot networking issues on Linux servers, Solaris servers, and other UNIX-flavored machines. Additional knowledge is needed if the server happens to run AIX, IBM's UNIX operating system
10) Set up and manage SAN/NAS systems as well as backups. Most mid-level enterprises don't have a storage team or storage admins, so the Linux administrator will need to take care of NAS/SAN storage as well as the backup infrastructure. The storage overhead will be fully eliminated once the project is 100% deployed in the cloud. However, with a first project, firms prefer to retain data in the local datacenter as well as in AWS
11) Set up and manage Linux server administration tasks, including fixing broken disks by running fsck (the disk check command), creating users and groups, granting roles appropriately, managing reports from the monitoring infrastructure, performing backup, restore, and recovery of servers, etc.
12) Plan and test business continuity in the form of disaster recovery testing on a quarterly basis
13) Revoke access from user accounts once a user leaves the organization. UNIX admins will receive regular emails from the HR department about this
14) Maintain and manage test, QA, UAT, and DEV servers in addition to production servers
15) Work closely with DBAs and grant them access to storage LUNs on an as-needed basis. Also, some database upgrades and patches demand a server reboot; Linux admins are involved in these tasks
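Many of the duties above map to a handful of recurring commands. A sketch of typical invocations — hostnames, usernames, group names, and device paths are placeholders, and most of these require root:

```shell
# Monitoring (duties 4 and 7): sample CPU, memory, and disk I/O
vmstat 5 3
iostat -x 5 3
sar -u 5 3

# Networking triage (duty 8): is the server reachable, and over which path?
ping -c 3 appserver01
traceroute appserver01

# Disk and user administration (duty 11): check an unmounted filesystem,
# then create a group and a user belonging to it
fsck -y /dev/sdb1
groupadd dbadmins
useradd -m -g dbadmins -s /bin/bash jsmith

# Offboarding (duty 13): lock and expire a departed user's account
usermod -L departed_user
chage -E 0 departed_user
```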