EP1 - Weekend Exam Cram : AZ-305 | 2025 – Ace the exam with Practice Questions & Expert Tips #az305
By Tech with Jaspal
Summary
## Key takeaways

- **Entra ID App Registration for SSO**: To ensure users can connect to app one without being prompted for authentication, recommend an Entra ID app registration. By using an Entra ID app registration, you can configure single sign-on, which allows users to authenticate once and access multiple applications without being prompted for credentials again. [01:05], [01:15]
- **Conditional Access for Compliant Devices**: To ensure users can access app one only from company-owned computers, implement conditional access policies in Entra ID. This way you can enforce policies that restrict access to app one to devices that are marked as compliant or joined to Entra ID. [01:57], [02:08]
- **Log Analytics + Agent for Central Monitoring**: Create a Log Analytics workspace in Azure Monitor for central monitoring of warning events in the system logs of 300 virtual machines. Install the Azure Monitor agent on the virtual machines, as it collects performance metrics, logs, and custom data from the virtual machines and sends them to the Log Analytics workspace. [02:48], [03:47]
- **Dynamic Data Masking for PII Protection**: Include dynamic data masking in your solution to ensure that only privileged users can view personally identifiable information in an Azure SQL database. Dynamic data masking helps prevent unauthorized access to sensitive data by hiding the sensitive data in the result set of a query over designated database fields, while the data in the database is not changed. [04:55], [05:11]
- **Password-Based SSO for Legacy Apps**: For applications that manage their own credential store and require username and password authentication without supporting identity providers, use password-based SSO. This method is appropriate when the application does not support SAML, OpenID Connect, or other federated protocols, as shown in the Microsoft SSO configuration flowchart. [07:44], [09:00]
- **System-Assigned Identity Minimizes Effort**: System-assigned managed identities are the correct answer for Azure Functions reading activity logs, because they provide a secure, automatically managed identity, eliminating the need for manual credential management and reducing administrative effort. When you enable a system-assigned managed identity, a service principal is created in Microsoft Entra ID for the identity and is tied to the lifecycle of that Azure resource. [12:46], [13:42]
Topics Covered
- Full Video
Full Transcript
Friends, this is the first question of our AZ-305 exam practice question series. You plan to deploy an Azure web app named app one that will use Microsoft Entra ID authentication. App one will be accessed from the internet by the users at your company. All the users have computers that run Windows 11 and are joined to Entra ID. You need to recommend a solution to ensure that the users can connect to app one without being prompted for authentication, and can access app one only from company-owned computers. Make a note: they can only access it from company-owned computers. What should you recommend for each requirement?

The first requirement is that the users can connect to app one without being prompted for authentication. Your options are an Entra ID managed identity, an Entra ID app registration, and Entra ID Application Proxy. Friends, to ensure that users can connect to app one without being prompted for authentication, you should recommend an Entra ID app registration. By using an Entra ID app registration, you can configure single sign-on, which allows users to authenticate once and access multiple applications without being prompted for credentials again. Since the users' computers are already joined to Entra ID, they can leverage SSO for seamless access to app one.

Now, the next requirement was that the users can access app one only from company-owned computers. Your options are an Entra ID administrative unit, Azure Policy, a conditional access policy, and Azure Application Gateway. Folks, to ensure users can access app one only from company-owned computers, you should implement conditional access policies in Entra ID. This way you can enforce policies that restrict access to app one to devices that are marked as compliant or joined to Entra ID.
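Such a device-based restriction can be expressed as a conditional access policy object. Below is a minimal sketch of that policy as a Microsoft Graph `conditionalAccessPolicy` payload, written as a Python dict; the display name and the app client ID are placeholders, not values from the video, and the exact property names follow the Graph schema as I understand it.

```python
# Sketch: a conditional access policy that only grants access to "app one"
# from devices marked compliant or joined to Entra ID. The display name
# and the application ID below are made-up placeholders.
policy = {
    "displayName": "Require compliant device for app one",  # hypothetical name
    "state": "enabled",
    "conditions": {
        "users": {"includeUsers": ["All"]},
        "applications": {"includeApplications": ["<app-one-client-id>"]},  # placeholder
    },
    "grantControls": {
        # Either control satisfies the policy: device marked as compliant,
        # or device joined to the directory (hybrid join).
        "operator": "OR",
        "builtInControls": ["compliantDevice", "domainJoinedDevice"],
    },
}

print(policy["grantControls"]["builtInControls"])
```

In practice this payload would be POSTed to the Graph conditional access endpoint or created through the Entra admin center; the key point for the exam is the grant control requiring a compliant or joined device.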
Let's look at the next question. You have an Azure subscription that contains 300 virtual machines that run Windows Server 2022. You need to centrally monitor all warning events in the system logs of the virtual machines. What should you include in the solution? The first part is the resources to create in Azure, and your options are a storage account, a search service, an event hub, and a Log Analytics workspace. Friends, what you need to do to achieve central monitoring is create a Log Analytics workspace. A Log Analytics workspace in Azure Monitor is the appropriate solution for collecting and analyzing data from multiple sources, including the system logs from virtual machines. By creating a Log Analytics workspace, you can configure the virtual machines to send their system logs to this workspace, where you can then query and monitor for warning events centrally.

Now, the next part is the configuration to perform on the virtual machines. What would you do on the virtual machines to achieve this? Your options are: configure continuous delivery, install the Azure Monitor agent, modify the membership of the Event Log Readers group, and create event subscriptions. Folks, the correct answer here would be to install the Azure Monitor agent on the virtual machines. The Azure Monitor agent collects performance metrics, logs, and custom data from your virtual machines. Installing this agent is essential to gather the system logs and send them to the Log Analytics workspace. The next steps in this setup would be to configure data collection to send the system logs to the Log Analytics workspace, and then create queries and alerts within the workspace to monitor the warning events in the system logs.
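In the workspace, that monitoring query would be written in KQL against the Event table, roughly `Event | where EventLog == "System" and EventLevelName == "Warning"`. The filtering it performs can be sketched in plain Python over simplified event records (the record shape below is made up for illustration):

```python
# Sketch: what the Log Analytics query does, applied to simplified event
# records of the kind the Azure Monitor agent collects. KQL equivalent
# (approximately): Event | where EventLog == "System" and EventLevelName == "Warning"
events = [
    {"Computer": "vm001", "EventLog": "System", "EventLevelName": "Warning", "EventID": 36874},
    {"Computer": "vm001", "EventLog": "System", "EventLevelName": "Information", "EventID": 7036},
    {"Computer": "vm002", "EventLog": "Application", "EventLevelName": "Warning", "EventID": 1001},
]

def system_warnings(records):
    """Return only Warning-level entries from the System log."""
    return [r for r in records
            if r["EventLog"] == "System" and r["EventLevelName"] == "Warning"]

print(system_warnings(events))  # only vm001's event 36874 matches
```

An alert rule in Azure Monitor would then run a query like this on a schedule and fire when the result count crosses a threshold.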
Next question. You plan to deploy an Azure SQL database that will store personally identifiable information (PII). You need to ensure that only privileged users can view the PII. What should you include in the solution? Your options are transparent data encryption (TDE), dynamic data masking, data discovery and classification, and role-based access control (RBAC). You should include dynamic data masking in your solution to ensure that only privileged users can view personally identifiable information in the Azure SQL database. Dynamic data masking helps prevent unauthorized access to sensitive data by enabling customers to designate how much of the sensitive data to reveal, with minimal effect on the application layer. It is a policy-based security feature that hides the sensitive data in the result set of a query over designated database fields, while the data in the database is not changed.
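The effect of masking on a result set can be sketched in a few lines of Python. This is not how Azure SQL implements the feature internally; it only imitates the behavior of the built-in `default()` and `email()` mask styles, and the table data is entirely made up:

```python
# Sketch: the effect of dynamic data masking on a query result set.
# Privileged users see real data; everyone else sees masked values.
# The stored rows themselves are never modified.
def mask_email(value: str) -> str:
    # The email() masking style keeps the first letter: aXXX@XXXX.com
    return value[0] + "XXX@XXXX.com"

def mask_default(value: str) -> str:
    return "xxxx"

def run_query(rows, privileged: bool):
    if privileged:
        return rows
    return [
        {"name": mask_default(r["name"]), "email": mask_email(r["email"])}
        for r in rows
    ]

rows = [{"name": "Alice Smith", "email": "alice@contoso.com"}]
print(run_query(rows, privileged=False))  # [{'name': 'xxxx', 'email': 'aXXX@XXXX.com'}]
```

Note that the source `rows` list is untouched after a masked query, which mirrors the key exam point: masking changes only what a query returns, not what is stored.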
Let's now look at why the other options are incorrect. Option A, TDE, encrypts the entire database at rest. It protects against unauthorized access to the physical files and helps secure the database from threats like disk theft. However, TDE does not control who can view the data once they have access to the database through legitimate means. Option C, data discovery and classification, is a tool for identifying and classifying sensitive data within your database. It helps you understand where your PII is located and categorize it appropriately. However, it doesn't enforce any access controls or masking on its own. Option D, role-based access control, is used to manage access at the control plane level, folks, and here we are talking about access at the data plane level. So again, an incorrect choice. Friends, I hope you now understand why options A, C, and D are incorrect. But if you still have any doubts, please post them in the comment section and I'll try to address them as soon as possible.
Fourth question. You have an application that is used by 6,000 users to validate their vacation requests. The application manages its own credential store. Users must enter a username and password to access the application. The application does not support identity providers. You plan to upgrade the application to use single sign-on authentication by using a Microsoft Entra application registration. Which SSO method should you use? Your options are SAML, header-based, OpenID Connect, and password-based. Friends, the key to answering this question is that users must enter a username and password to access the application, and the application manages its own credential store. For that very reason, you will need to use password-based SSO in this case. Let's head to the Microsoft documentation to understand this in more detail.

So friends, there are several ways you can configure an application for SSO. Choosing an SSO method depends on how the application is configured for authentication. Let's go through this flowchart to understand what parameters you need to take into consideration when deciding the SSO method for your application. Folks, if you are developing a new application, you should start by looking at OpenID Connect and OAuth. If you are not developing a new application, the next question is: are you ready to configure single sign-on for an existing app? If you are, then: is your app in the cloud or on-premises? If it is in the cloud, you need to check whether your application supports a SAML-based protocol. If it does, you will choose SAML-based authentication. If it does not, you will check whether your app authenticates with a username and password. If it does, you will choose password-based authentication. If it doesn't, SSO will be disabled.

Before looking at other parts of this flowchart, let's look at what you will do if your existing application is on-premises. If your application is on-premises, again you check whether it supports a SAML-based protocol. If it does, you choose SAML-based authentication. If it does not support SAML-based authentication, you check whether your app can authenticate with IWA (Integrated Windows Authentication). If it supports IWA, you choose IWA. If it does not support IWA, you check whether your app supports authentication with HTTP headers; if it does, you choose header-based. If it does not, the last option is to check whether your app can authenticate with a username and password. If it does, you choose password-based; if it does not, authentication will be disabled.

So folks, this is a very clear flowchart telling you what parameters you need to take into consideration when deciding which authentication method to configure for SSO for your application.
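The flowchart above can be encoded as a small decision function. This is a sketch of the decision order as described in the video, not Microsoft's own code; the `App` field names are invented for illustration:

```python
# Sketch: the SSO-selection flowchart as a function. Field names on the
# App record are made up; the branch order follows the flowchart above.
from dataclasses import dataclass

@dataclass
class App:
    is_new: bool = False
    on_premises: bool = False
    supports_saml: bool = False
    supports_iwa: bool = False          # Integrated Windows Authentication
    supports_http_headers: bool = False
    uses_username_password: bool = False

def choose_sso_method(app: App) -> str:
    if app.is_new:
        return "OpenID Connect / OAuth"
    if app.supports_saml:               # first preference, cloud or on-premises
        return "SAML-based"
    if app.on_premises:
        if app.supports_iwa:
            return "IWA"
        if app.supports_http_headers:
            return "header-based"
    if app.uses_username_password:
        return "password-based"
    return "disabled"

# The app in this question: existing, own credential store, no IdP support.
legacy = App(uses_username_password=True)
print(choose_sso_method(legacy))  # password-based
```

Running it on the question's app lands on password-based, matching the answer.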
Now, the key thing you can take from this configuration flowchart is that if you are developing a new application, the recommendation is OpenID Connect and OAuth. If your application already exists, there are different parameters to take into consideration. If it is a cloud-based application, the first choice is SAML-based authentication, which you would have seen in many places in your organization. If the application doesn't support SAML-based authentication, the next option is password-based authentication. If the application is on-premises, again the first preference is SAML-based authentication; the second preference is IWA, where you integrate with your identity provider; then header-based; and finally, I would say, password-based authentication is the last option you should look for, because even though you configure it as SSO, it won't be a true SSO: you will have to input your username and password every time. However, the key requirement in this question was that the application does not support identity providers, so IWA was out of the picture. The application had its own credential store, and one of the requirements was that users must enter a username and password to access the application. That is the reason we have chosen password-based authentication. So friends, I hope you now understand why option D, password-based, is the correct answer here.
Question number five. You are developing an app that will read activity logs for an Azure subscription by using Azure Functions. You need to recommend an authentication solution for Azure Functions. The solution must minimize administrative effort. What should you include in the recommendation? Your options are: an enterprise application in Entra ID, an application registration in Entra ID, system-assigned managed identities, and shared access signatures (SAS). System-assigned managed identities are the correct answer here because they provide a secure, automatically managed identity for Azure Functions, eliminating the need for manual credential management and reducing administrative effort. They integrate seamlessly with Azure services, ensuring secure access to resources like the activity log without the need for additional configuration or secret handling.

Now folks, again, let's head to the Microsoft documentation to understand the different types of managed identities. There are basically two types of managed identities: system-assigned and user-assigned. Some Azure resources, such as virtual machines, allow you to enable a managed identity directly on the resource, which is what system-assigned means. When you enable a system-assigned managed identity, a service principal of a special type is created in Microsoft Entra ID for the identity. The service principal is tied to the lifecycle of that Azure resource: when the Azure resource is deleted, Azure automatically deletes the service principal for you. By design, only that Azure resource can use this identity to request tokens from Microsoft Entra ID. You authorize the managed identity to have access to one or more services. The name of the system-assigned service principal is always the same as the name of the Azure resource it is created for. For a deployment slot, the name of its system-assigned identity is app name/slots/slot name.

So folks, I hope you now understand what a system-assigned managed identity is. Let's now look at user-assigned managed identities. You may also create a managed identity as a standalone Azure resource. You can create a user-assigned managed identity and assign it to one or more Azure resources. When you enable a user-assigned identity, a service principal of a special type is created in Microsoft Entra ID for the identity. The service principal is managed separately from the resources that use it. This is the key difference between system-assigned and user-assigned: system-assigned is tied to one particular resource and gets deleted when that resource is deleted, but user-assigned is managed separately and does not get deleted when the resource is deleted. Basically, a user-assigned managed identity can be used by multiple resources. You authorize the managed identity to have access to one or more services.
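The lifecycle difference between the two identity types can be modeled with a few plain Python objects. This is only an illustrative sketch of the semantics described above, not an Azure API:

```python
# Sketch: system-assigned vs user-assigned managed identity lifecycle.
# Deleting a resource removes its system-assigned service principal,
# but a user-assigned identity is a standalone resource and survives.
class Tenant:
    def __init__(self):
        self.service_principals = set()

class Resource:
    def __init__(self, tenant, name, user_assigned=None):
        self.tenant = tenant
        # System-assigned: SP name matches the resource it is created for.
        self.system_sp = name
        tenant.service_principals.add(self.system_sp)
        self.user_assigned = user_assigned  # shared, managed separately

    def delete(self):
        # Azure automatically deletes the system-assigned SP with the
        # resource; any attached user-assigned identity is left alone.
        self.tenant.service_principals.discard(self.system_sp)

tenant = Tenant()
tenant.service_principals.add("shared-uami")  # user-assigned, created standalone
vm = Resource(tenant, "vm001", user_assigned="shared-uami")
vm.delete()
print(tenant.service_principals)  # {'shared-uami'} - only the system SP is gone
```

In a real Function app you would simply enable the system-assigned identity and grant it a reader role on the activity log; no secrets ever appear in code or configuration.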
So folks, you can go through this documentation, whose link is already on your screen, if you want to understand more about system-assigned and user-assigned managed identities. I would highly recommend that you understand these topics in more detail, as there can be a lot of variations of the example questions that I am bringing in front of you.
Sixth question of the series. You have an Azure subscription that contains a thousand resources. You need to generate compliance reports for the subscription. The solution must ensure that the resources can be grouped by department. What should you use to organize the resources? Your options are: Azure Policy and tags, application groups and quotas, resource groups and role assignments, administrative units, and Azure Lighthouse. Friends, this is one of the easiest questions that you will get in the Azure Solutions Architect Expert exam. To generate compliance reports and ensure resources can be grouped by department, you should use Azure Policy and tags. Friends, I think this question is more applicable to the AZ-900 exam; it is so simple, just a basic concept that everyone working with Azure should know. Using Azure Policy and tags, you can apply tags to resources to group them by department, and you can create policies to enforce tagging rules, ensuring all resources have the necessary tags for proper categorization and compliance reporting.
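The grouping step is just a tag lookup. Here is a minimal sketch with an invented resource list; in Azure the tags would come from Azure Resource Graph or the ARM API, and Azure Policy would be what enforces that the `department` tag exists:

```python
# Sketch: grouping resources by a "department" tag for a compliance report.
# The resource list is made up for illustration.
from collections import defaultdict

resources = [
    {"name": "vm-fin-01", "tags": {"department": "finance"}},
    {"name": "sql-hr-01", "tags": {"department": "hr"}},
    {"name": "vm-fin-02", "tags": {"department": "finance"}},
    {"name": "untagged-vm", "tags": {}},  # what Azure Policy would flag or fix
]

def group_by_department(items):
    groups = defaultdict(list)
    for r in items:
        dept = r["tags"].get("department", "NON-COMPLIANT: missing tag")
        groups[dept].append(r["name"])
    return dict(groups)

report = group_by_department(resources)
print(report["finance"])  # ['vm-fin-01', 'vm-fin-02']
```

The untagged resource lands in its own bucket, which is exactly the gap a policy with a "require tag" or "inherit tag" effect closes.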
Next question. You have an Azure subscription that contains an Azure Blob storage account named store one. You have an on-premises file server named server one that runs Windows Server 2022. Server one stores 500 GB of company files. You need to store a copy of the company files from server one in store one. Which two possible Azure services achieve this goal? Your options are: an Azure Logic Apps integration account, an Azure Import/Export job, an Azure Analysis Services on-premises data gateway, an Azure Batch account, and Azure Data Factory. Folks, the two services that you can use to achieve the goal of copying the files from an on-premises server to the Azure storage account are an Azure Import/Export job and Azure Data Factory. An Azure Import/Export job allows you to transfer large amounts of data to Azure Blob storage by shipping hard drives to an Azure data center. This method is efficient for transferring large data sets like the 500 GB of company files. Azure Data Factory is a data integration service that enables you to create, schedule, and orchestrate data workflows. You can use Azure Data Factory to create a pipeline that copies files from your on-premises server to the Azure Blob storage account.
You have an Azure subscription. The subscription has a blob container that contains multiple blobs. Multiple users in the finance department of your company plan to access the blobs during the month of April. You need to recommend a solution to enable access to the blobs during the month of April only. Which security solution should you include in the recommendation? Your options are: conditional access policies, certificates, shared access signatures, and access keys. You should include shared access signatures in your recommendation. Shared access signatures, also called SAS, allow you to grant limited access to Azure storage resources such as blobs for a specified period. In this case, you can create a SAS token that provides access to the blobs in the container for the month of April. The SAS token can be configured to expire at the end of April, ensuring that access is automatically revoked after the specified period. This approach is secure and ensures that access is time-bound, meeting the requirement of enabling access only during April.
Folks, let's head to the Azure portal to understand how you can create a shared access signature token, how you configure these time-based settings, and what other settings you can control. So friends, there are two ways to generate a SAS token: you can generate a token that gives access at the storage account level, or you can go a level further down and create a SAS token that gives access at the container level.

Let's first create a SAS token at the account level and see what properties are available. To create a SAS token at the storage account level, scroll down in the left blade and you will see an option called shared access signature. Once you are in that menu, you get a bunch of options. You can choose which services are allowed, and the options available are blob, file, queue, and table. What this tells you is that you can limit access to a particular type of service using the shared access signature. For example, you may want to give your consumers or customers access to manage only blob storage, queue storage, or table storage; you can do that level of segregation with a SAS token. For this instance, let's just choose blob. For allowed resource types, you can choose whether to grant access at the service level, the container level, or the object level. For this use case, let's select service.

Then comes the next part, which is the allowed permissions. Does the shared access signature give just read-only permission, or does it also give write, delete, list, immutable storage, update, or permanent delete? These are the different types of permissions available in the storage account. You should always follow the principle of least-privilege access: if you are creating a SAS token for a customer who just needs to read blobs, the allowed permissions should just be read; or maybe you could give them access to upload, which is write access, and not give them delete. Based on your requirements, you have to choose the properties here. Next come blob versioning permissions (enable deletion of versions) and allowed blob index permissions, so there are further permissions you can choose based on your requirements.

Then comes the part our question was about: the start and expiry date. You can set a certain period during which this SAS token will work. For example, if you want this token to become active after 10 days, you could choose that particular date. Obviously, I am getting a warning here because my start date is after the end date; if I change my end date, the warning goes away. You can choose a particular time as well, for when this SAS token becomes active and when it gets deactivated. The question was about granting access just throughout the month of April, and this is how you can limit it to a particular duration; you don't need to wait for April to start before generating the token. You can choose the time zone as well.

The next part is allowed IP addresses. For example, if you want to restrict access to just a subset of users, maybe within your organization, you can specify the IP range of your internal virtual networks here so that the token only works from inside the organization. If the token gets leaked and someone tries to access your blobs over the internet, but you have the IP address restriction specified, their requests will be denied even though the token had read or write access. Then you have the option to specify the allowed protocols; the recommendation is to use only HTTPS, but you have the option to allow both HTTPS and HTTP as well. The preferred routing tier is more about whether you are using the Microsoft backbone network, and you can change these tiers. Then you need to choose the signing key. Basically, there are two signing keys that get created by default; you can go ahead and create custom keys as well, but for now I'll just keep signing key one and select generate SAS and connection string. Here, the connection string has been generated, the SAS token has been generated, and the blob service SAS URL has also been generated, so you could share this URL with your customers or consumers and they should be able to access the storage account using it. Basically, the blob SAS URL is just a combination of the SAS token and the connection string, so you do not really need to share them separately if you are sharing the SAS URL.

Now, this is how you generate a SAS token that gives access at the storage account level. Here you do have the option to limit it to containers; you can select container as well. But say you have 10 containers in a storage account and you want to limit access to just one of them. The way you do that is to go into your containers. For example, I just have a logs container at the moment, and I need to grant a customer access to this logs container. You click on the three dots on the right-hand side of the container and select generate SAS. Now you can see the difference: the options are a bit fewer, because the SAS token you are creating is pretty much restricted to this particular container only. For the signing method, you can either use the account key, which we briefly discussed when we were creating the SAS token at the account level, or you have the option to use a user delegation key. I don't have access to my account at this point, so let's switch to the account key; again, you can see both keys. You also have the option to use stored access policies. I have not created any stored access policy, but you can create one if you want. The next part is again the pretty standard set of options: the permissions that your SAS token needs to have. All the options are standard here; the difference is that earlier you were creating a SAS token that gave access at the account level, and now you are creating one that gives access at the container level.

So folks, I hope you now understand how to create a SAS token, and you have more context on how to approach such questions in the exam. Multiple questions can be formed revolving around SAS tokens. In this question we were talking about a particular time frame; the requirement could also be that access needs to be granted from a subset of IPs, and we can do that here by using the allowed IPs. Or the question could say that the allowed protocol is just HTTPS and not HTTP; again, you have the option to implement that limitation here as well. Let's head back to our questions.
Now, question number nine. You have several Azure App Service web apps that use Azure Key Vault to store data encryption keys. Several departments have the following requests to support the web apps. Which service should you recommend for each department's request? The first department is the quality assurance department, and the requirement is: require temporary administrator access to create and configure additional web apps in the test environment. Your options are Entra ID Privileged Identity Management, Azure managed identity, Entra ID Connect, and Entra ID Identity Protection. Folks, the correct answer here would be option A. Entra ID Privileged Identity Management, also called Entra PIM, allows you to manage, control, and monitor access within Entra ID, including providing just-in-time privileged access to Azure resources with temporary elevation for specific roles like administrator. This fits the requirement for temporary access.

Now let's look at the requirement of the next department, which is the development department: enable the applications to access Key Vault and retrieve keys for use in code. Again, you have the same four options. To satisfy this requirement, you should use Azure managed identity. Azure managed identity is specifically designed to provide Azure services like App Service and virtual machines with an automatically managed identity in Entra ID, allowing them to authenticate against Azure services like Key Vault without the need to manage credentials in your code.

Now let's look at the requirement from the third department, which is the security department: review the membership of administrative roles and require users to provide a justification for continued membership; get alerts about changes in administrator assignments; see a history of administrator activation. Again, you have all those four options, and folks, you should recommend Entra ID PIM in this case as well. Entra ID PIM is designed for managing, controlling, and monitoring privileged access to Entra ID roles. It allows you to review and manage the membership of administrative roles, require justification for continued role membership, receive alerts on changes in administrator assignments, and view a history of role activation and changes. This is the reason why Entra ID PIM is perfect to satisfy the requirement of the security department.
department. Let's look at next question. You plan to deploy an app that
question. You plan to deploy an app that will use an Azure storage account. You
need to deploy the storage account. The storage account must meet the following requirements: store the data for multiple users; encrypt each user's data by using a separate key; encrypt all the data in the storage account by using customer-managed keys. What two options can you deploy? And your options are blobs in a general-purpose v2 storage account, files in a premium file share storage account, blobs in an Azure Data Lake Storage Gen2 account, and files in a general-purpose v2 storage account. Folks, the two most suited options for the given requirement would be option A and option C. General-purpose v2 storage accounts can store data for multiple users, effectively handling large-scale applications. They allow for encryption at the blob level, which means you can encrypt individual blobs with different keys managed via Azure Key Vault, and they support encryption with customer-managed keys from Azure Key Vault, giving you control over encryption. Now, Azure Data Lake Storage Gen2 is specifically designed for big data analytics workloads with a hierarchical namespace, which might be advantageous if you need fine-grained control over data organization and access, particularly in analytics scenarios. It supports a hierarchical namespace, enabling more efficient data management, especially in scenarios involving complex data structures. So friends, if your focus is purely on blob storage with no specific need for a hierarchical namespace or big data analytics, blobs in a general-purpose v2 storage account is indeed a very suitable and flexible choice. If your workload involves complex data structures or big data analytics, then Azure Data Lake Storage Gen2 could provide more benefits. So if any more requirements were added to the three already mentioned, then you might have to drill down to just one option. But in our case the requirement is a bit generic, so option A and option C both satisfy the requirement.
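As a side note, the per-user key requirement maps naturally to encryption scopes on a blob storage account. A hedged ARM sketch of an encryption scope backed by a customer-managed key (the account, vault, and key names here are hypothetical, not from the question):

```json
{
  "type": "Microsoft.Storage/storageAccounts/encryptionScopes",
  "apiVersion": "2023-01-01",
  "name": "storageacct1/user1scope",
  "properties": {
    "source": "Microsoft.KeyVault",
    "keyVaultProperties": {
      "keyUri": "https://vault1.vault.azure.net/keys/user1key"
    }
  }
}
```

Each user's blobs can then be written under that user's scope, while the account-level customer-managed key covers everything else.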
Next question. You are designing an Azure governance solution. All Azure resources must be easily identifiable based on the following operational information: environment, owner, department, and cost center. You need to ensure that you can use the operational information when you generate reports for the Azure resources. What should you include in the solution? Your options are an Azure data catalog that uses the Azure REST API as a data source, Microsoft Entra ID administrative units, an Azure management group that uses parent groups to create a hierarchy, and an Azure policy that enforces tagging rules. Friends, this is again one of the simplest questions that you can get in the AZ-305 exam. To ensure that all resources are easily identifiable based on operational information like environment, owner, department, and cost center, and to use this information for generating reports, you should apply an Azure policy that enforces tagging values and rules.
This is similar to a question we covered in the last part of the series as well. So if you still have any doubts, then please post them in the comment section and I'll try to address them as soon as possible. You have an Entra ID tenant named contoso.com that has a security group named Group1. Group1 is configured for assigned membership. Group1 has 50 members, including 20 guest users. You need to recommend a solution for evaluating the membership of Group1. The solution must meet the following requirements: the evaluation must be repeated automatically every 3 months; every member must be able to report whether they need to be in Group1; users who report that they do not need to be in Group1 must be removed from Group1 automatically; users who do not report whether they need to be in Group1 must be removed from Group1 automatically. What should you include in the recommendation? Your options are change the membership type of Group1 to dynamic user, implement Entra ID Identity Protection, implement Entra ID Privileged Identity Management, and create an access review. And friends, you should create an access review to meet the specified requirements. Let's understand how each of these requirements is satisfied by using access reviews. Now, access reviews in Entra ID can be configured to repeat automatically, such as every 3 months, which satisfies the automatic re-evaluation requirement. Users can be prompted during the review to indicate if they need to remain in the group. This allows every member to report their need to stay in the group, which is also called self-reporting. Now, users who either do not respond to the review or indicate they do not need to be in the group can be removed automatically based on the review result, which satisfies the last two requirements of automatic removal. Next question, friends. You have
an Azure subscription. You plan to deploy a monitoring solution that will include the following: Azure Monitor Network Insights, Application Insights, Microsoft Sentinel, and VM Insights. The monitoring solution will be managed by a single team. What is the minimum number of Azure Monitor Log Analytics workspaces required? Your options are 1, 2, 3, and 4. Folks, the minimum number of Log Analytics workspaces required in this case would be one, the reason being that all the services mentioned here, which are Azure Monitor Network Insights, Application Insights, Microsoft Sentinel, and VM Insights, can send their logs and metrics to a single Log Analytics workspace. Next question, folks. You have
an Azure subscription. You need to deploy a relational database. The solution must meet the following requirements: support multiple read-only replicas; automatically load balance read-only requests across all the read-only replicas; minimize administrative effort. What should you use? Now, the first part of the question asks which of the following services you should choose: a single Azure SQL database, an Azure SQL Database elastic pool, or Azure SQL Managed Instance. Now, the correct option here is option A, Azure SQL Database, as it supports active geo-replication and auto-failover groups, which allow for up to four read-only replicas, and these read-only replicas can automatically handle and balance read-only workloads. The service is fully managed, requiring minimal manual intervention. Let's now look at the next part of this question: service tier. Your options are Business Critical, Hyperscale, or Premium. Now friends, in the Premium and Business Critical service tiers, only one of the read-only replicas is accessible at any given time. Hyperscale supports multiple read-only replicas, which is one of the requirements. So option B is the correct choice in this case. There is a link on your screen at the moment. If you want to read more about scaling out in Azure SQL, then go through the link. Your company has the divisions
shown in the following table. You basically have two divisions named East and West. East is hosted in the Sub1 subscription and West is hosted in the Sub2 subscription. Now, the East division uses an Entra ID tenant named devopshhub.com and the West division uses an Entra ID tenant named techwithjaspal.com. Sub1 contains an Azure App Service web app named App1. App1 uses Entra ID for single-tenant user authentication. Users from devopshhub.com can authenticate to App1. You need to recommend a solution to enable users in the techwithjaspal.com domain to authenticate to App1. What should you recommend? Your options are configure the Entra ID provisioning service, configure Entra ID join, configure supported account types in the application registration and update the sign-in endpoint, and enable Entra ID pass-through authentication and update the sign-in endpoint. Friends, to achieve this, first you will have to configure supported account types in the application registration and then update the sign-in endpoint, which is option C. Next question. You are designing an
app that will be hosted on Azure virtual machines that run Ubuntu. The app will use a third-party email service to send email messages to the users. The third-party email service requires that the app authenticate by using an API key. You need to recommend an Azure Key Vault solution for storing and accessing the API key. The solution must minimize administrative effort. What should you recommend using to store and access the key? Now, for storage, you have three options basically: certificate, key, and secret. Folks, API keys are typically stored as secrets in Azure Key Vault. Azure Key Vault can store and manage secrets like API keys, passwords, or database connection strings. Now, the next part of the question is access: whether it needs to be an API token, a managed identity, or a service principal. And folks, you should use a managed identity for the virtual machines. Managed identities allow your app running on the virtual machine to securely access Azure Key Vault without needing to manage credentials manually. Question number 17. You have an Azure subscription that
contains 10 web apps. The apps are integrated with Entra ID and are accessed by users on different project teams. The users frequently move between projects. You need to recommend an access management solution for the web apps. The solution must meet the following requirements: the users must only have access to the app of the project to which they are currently assigned; project managers must verify which users have access to their project's app and remove users that are no longer assigned to their project; once every 30 days, the project managers must be prompted automatically to verify which users are assigned to their projects. What should you include in the recommendation? And your options are Entra ID Identity Protection, Microsoft Defender for Identity, Microsoft Entra Permissions Management, and Entra ID identity governance. The best recommendation for this scenario is Entra ID identity governance. Now, Entra ID identity governance includes features like access reviews, which allow project managers to verify user access to apps regularly. This meets the requirement of prompting project managers every 30 days to review and manage user access.
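For context, such a recurring review can also be created programmatically through the Microsoft Graph access review API. A simplified, hedged sketch of the request body (the group and reviewer IDs, the start date, and the display name are all placeholders, and the absoluteMonthly recurrence approximates the 30-day cadence):

```json
{
  "displayName": "Review project app access",
  "scope": {
    "query": "/groups/<project-group-id>/members",
    "queryType": "MicrosoftGraph"
  },
  "reviewers": [
    {
      "query": "/users/<project-manager-id>",
      "queryType": "MicrosoftGraph"
    }
  ],
  "settings": {
    "autoApplyDecisionsEnabled": true,
    "defaultDecision": "Deny",
    "recurrence": {
      "pattern": { "type": "absoluteMonthly", "interval": 1 },
      "range": { "type": "noEnd", "startDate": "2025-01-01" }
    }
  }
}
```

With autoApplyDecisionsEnabled and a Deny default, users the project manager does not approve are removed automatically.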
Now, role assignment in Entra ID helps ensure that users only have access to the resources they currently need based on their role or project assignment. This aligns with the requirement for users to access only the app of the project they are currently assigned to. And utilizing automated processes in Entra ID can automate the process of reviewing and updating access permissions, making it easier for project managers to maintain proper access controls without manual intervention. Let's look at the next question, folks. You have an Azure subscription that contains 50 Azure SQL databases.
You create an Azure Resource Manager template named Template1 that enables Transparent Data Encryption (TDE). You need to create an Azure policy definition named Policy1 that will use Template1 to enable TDE for any non-compliant Azure SQL databases. How should you configure Policy1? Now, there are different parts to this question. The first one is: set available effects to. Your options are DeployIfNotExists, EnforceRegoPolicy, and Modify. Folks, you will need to set available effects to DeployIfNotExists. A DeployIfNotExists policy definition executes a template deployment when the condition is met. Now, the next part of the question is: include in the definition. Your options are the identity required to perform the remediation task, the scope of the policy assignment, and the role-based access control (RBAC) roles required to perform the remediation task. Friends, you will need to include the identity required to perform the remediation task in the policy definition.
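To make this concrete, here is a trimmed, illustrative sketch of what a DeployIfNotExists policy rule for TDE can look like (the deployment template body is omitted; the full definition is in the Microsoft documentation linked in the video):

```json
{
  "if": {
    "field": "type",
    "equals": "Microsoft.Sql/servers/databases"
  },
  "then": {
    "effect": "deployIfNotExists",
    "details": {
      "type": "Microsoft.Sql/servers/databases/transparentDataEncryption",
      "existenceCondition": {
        "field": "Microsoft.Sql/transparentDataEncryption.status",
        "equals": "Enabled"
      },
      "deployment": {
        "properties": {
          "mode": "incremental",
          "template": { }
        }
      }
    }
  }
}
```

When a database fails the existence condition, the policy runs the embedded template under a managed identity to remediate it.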
A managed identity is necessary for the policy to execute the ARM template and apply the remediation. Now, the other two options, which are the scope of the policy assignment and the role-based access control roles required to perform the remediation task, are typically specified when assigning the policy, not within the policy definition itself. So you need to understand what is included in the policy definition and what is used when you are assigning the policy. You need all three options, but the difference is that option A is required in the policy definition, while option B and option C are required when you are assigning the policy. Now, the scenario being talked about in this question is clearly documented on the Microsoft website as well. There is a link on your screen at the moment. Go through this link to understand how you can implement this scenario in the real world. Next question. You need to design a storage
solution for an app that will store large amounts of frequently used data. The solution must meet the following requirements: maximize data throughput; prevent the modification of data for one year; minimize latency for read and write operations. Which Azure storage account type and storage service should you recommend? Now, the first part of the question is about the storage account type, and your options are BlobStorage, BlockBlobStorage, FileStorage, StorageV2 with premium performance, and StorageV2 with standard performance. And folks, the storage account type that you should recommend in this case is BlockBlobStorage. Block blob storage is designed for workloads that require low latency and high throughput, making it ideal for scenarios where performance is critical. Now, the next part of the question is the storage service, and your options are blob, file, and table, and folks, you should recommend the blob service in this case. Blob storage is optimized for storing massive amounts of unstructured data, and it can handle frequently accessed data easily. Azure Blob Storage supports object-level immutability policies, which can prevent the modification of data for a certain period, satisfying the requirement to prevent data modification for one year. So folks, I hope you now understand why those two options were chosen as the correct choice. But if you still have any doubts, please post them in the comment section. Question number
20. You are planning an Azure IoT Hub solution that will include 50,000 IoT devices. Each device will stream data including temperature, device ID, and time data. Approximately 50,000 records will be written every second. The data will be visualized in near real time. You need to recommend a service to store and query the data. Which two services can you recommend? Your options are Azure Table Storage, Azure Event Grid, Azure Cosmos DB SQL API, and Azure Time Series Insights. And folks, the two services that you should recommend in this case are option C and option D, which are Azure Cosmos DB SQL API and Azure Time Series Insights. Now, Azure Cosmos DB is a highly scalable, globally distributed database that can handle large volumes of data with low latency. The SQL API allows for querying the data efficiently, making it suitable for real-time data ingestion and querying. Now, Cosmos DB also supports high-throughput scenarios, making it a good fit for handling 50,000 records per second. The next service, Azure Time Series Insights, is specifically designed for storing, querying, and visualizing time series data from IoT devices. It provides real-time insights into the data, which is perfect for scenarios where you need to visualize data in near real time. It can handle high ingestion rates and provides powerful tools for analyzing timestamped data. Now let's look at why the other options are incorrect, folks. Azure Table Storage, while scalable, is not designed for real-time querying and visualization of large volumes of IoT data. It is more suitable for simple, low-cost storage of structured data. Then the last option, which is Azure Event Grid: it is used for event-driven architectures and routing events, not for storing and querying large data sets.
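For a sense of how the Cosmos DB side might be shaped, a container for this telemetry would typically be partitioned on the device ID so that 50,000 writes per second spread evenly across partitions. An illustrative ARM fragment (the account, database, container names, and throughput figure are hypothetical):

```json
{
  "type": "Microsoft.DocumentDB/databaseAccounts/sqlDatabases/containers",
  "apiVersion": "2023-04-15",
  "name": "iotaccount1/telemetrydb/readings",
  "properties": {
    "resource": {
      "id": "readings",
      "partitionKey": { "paths": [ "/deviceId" ], "kind": "Hash" }
    },
    "options": {
      "autoscaleSettings": { "maxThroughput": 100000 }
    }
  }
}
```

Partitioning on /deviceId keeps each device's writes in a single logical partition while letting throughput scale horizontally across the fleet.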
So folks, if you are liking the content, do not forget to hit the like button and subscribe to the channel. You have an app named App1 that uses two on-premises Microsoft SQL Server databases named DB1 and DB2. You plan to migrate DB1 and DB2 to Azure. You need to recommend an Azure solution to host DB1 and DB2. The solution must meet the following requirements, and the two requirements that are given are: support server-side transactions across DB1 and DB2; minimize administrative effort to update the solution. What should you recommend?
Your options are two databases on the same SQL Server instance on an Azure virtual machine, two Azure SQL databases in an elastic pool, two Azure SQL databases on different Azure SQL Database servers, and two databases on the same Azure SQL managed instance. Friends, the first key to ruling out an option is minimize administrative effort, which means we are looking for a managed service. So we can rule out option A, as SQL Server on an Azure virtual machine will introduce administrative effort. Now, the second key to ruling out options is support for server-side transactions, and for that very reason we will have to go with option D, two databases on the same Azure SQL managed instance. Now, server-side distributed transactions using Transact-SQL are available only for Azure SQL Managed Instance. If the question were talking about client-side transactions, then Azure SQL Database would also be a right choice. So that is why I stressed that server-side distributed transactions are what helps us eliminate the incorrect choices. Now, distributed transactions can be executed only between instances that belong to the same server trust group, and in this scenario, managed instances need to use linked servers to reference each other. Now folks, before we proceed
forward, I wanted to let you know about my presence on Topmate. Whether you are preparing for certifications, shifting into DevOps or cloud roles, looking for hands-on guidance with tools like Jenkins, GitHub Actions, Ansible, Terraform and so on, or seeking help with your resume, job search, or mock interviews, I offer tailored one-to-one mentoring sessions to give you a clear, confident path forward. Feel free to use this link to book a session with me and I'll be more than happy to guide you through the journey. The link to my Topmate profile will be available in the description section of this video. And if you have any queries, you can always reach out to me at devopshhub2023@gmail.com before making the booking and I'll be happy to sort those out for you. This is question
number 22 of the series. You have an on-premises database that you plan to migrate to Azure. You need to design the database architecture to meet the following requirements, and your requirements are: the architecture should support scaling up and down; it should support geo-redundant backups; it should support a database of up to 75 terabytes; and it should be optimized for online transaction processing (OLTP). What should you include in the recommendation? Now, the first part of the question is around the service that you should include, and your options are Azure SQL Database, Azure SQL Managed Instance, Azure Synapse Analytics, and SQL Server on Azure virtual machines. Folks, the suitable service for the given requirement will be Azure SQL Database. Now, Azure SQL Database is a fully managed database service with built-in capabilities for scalability, geo-redundancy, and backup, and it is designed for OLTP workloads. Let's now understand why the other options are incorrect here. Now, option B, Azure SQL Managed Instance, does not support database sizes over 8 terabytes. Option C, Azure Synapse Analytics, is best suited for data warehousing and analytics, which is OLAP. And the last option, SQL Server on an Azure virtual machine, is not ideal as it's not a managed service. Now, the next part of the question asks which service tier you should use for the service that you chose in the previous part of the question. Your options are Basic, Business Critical, General Purpose, Hyperscale, Premium, and Standard. And folks, you will need to choose the Hyperscale service tier because it supports up to 100 terabytes of data, and our requirement is to support 75 terabytes, which no other service tier supports. Let's look at the next question.
You have an Azure subscription that contains a storage account. An application sometimes writes duplicate files to the storage account. You have a PowerShell script that identifies and deletes duplicate files in the storage account. Currently, this script is run manually after approval from the operations manager. You need to recommend a serverless solution that performs the following actions: runs the script once an hour to identify whether duplicate files exist; sends an email notification to the operations manager requesting approval to delete the duplicate files; processes an email response from the operations manager specifying whether the deletion was approved; runs the script if the deletion was approved. What should you include in the recommendation? Your options are combinations of different services, and they are Azure Logic Apps and Azure Event Grid, Azure Pipelines and Azure Service Fabric, Azure Logic Apps and Azure Functions, and Azure Functions and Azure Batch. Folks, you can use a combination of Azure Logic Apps and Azure Functions to achieve the given requirement. Now, you use Azure Logic Apps to create a workflow that runs the PowerShell script once an hour using a time-based trigger, sends an email notification to the operations manager for approval, and processes the email response. You then use Azure Functions to host the PowerShell script, which can be triggered by the logic app when the operations manager approves the deletion. Combining Azure Logic Apps and Azure Functions will provide the necessary components to meet the requirements of this scenario. Question number
scenario. Question number 24. You are designing an application
24. You are designing an application that will aggregate content for users.
You need to recommend a database solution for the application. The
solution must meet the following requirements. It should support SQL
requirements. It should support SQL commands. It should support multim
commands. It should support multim masteraster rightes and guarantee low latency read operations. What should you include in
operations. What should you include in the recommendation? Your options are
the recommendation? Your options are Azure SQL database that uses active geo replication, Azure SQL database hypers
scale, Azure database for postgrade SQL, Azure Cosmos DBS SQL API. For an application that requires
API. For an application that requires SQL support, multim masteraster writes and low latency read operations, the best recommendation is Azure Cosmos DB
with the SQL API. Cosmos DB provides a SQL API
API. Cosmos DB provides a SQL API allowing you to query the data using SQL like commands and supports multim masteraster configurations allowing
write operations in multiple regions.
This ensures high availability and enables low latency rights from any region. Now Cosmos DB also offers
region. Now Cosmos DB also offers automatic distribution of data across multiple regions with configurable read replicas that provide low latency reads
globally.
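To illustrate, multi-region writes in Cosmos DB are essentially one account-level switch plus a list of regions. A hedged ARM sketch (the account name and regions are placeholders):

```json
{
  "type": "Microsoft.DocumentDB/databaseAccounts",
  "apiVersion": "2023-04-15",
  "name": "contentapp-cosmos",
  "location": "eastus",
  "properties": {
    "databaseAccountOfferType": "Standard",
    "enableMultipleWriteLocations": true,
    "locations": [
      { "locationName": "East US", "failoverPriority": 0 },
      { "locationName": "West Europe", "failoverPriority": 1 }
    ]
  }
}
```

With enableMultipleWriteLocations set to true, every listed region accepts writes as well as reads, which is what gives you the multi-master behavior the question asks for.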
So folks, if you still have any doubts about why I have chosen Azure Cosmos DB SQL API, then please post your doubts in the comment section and I'll try to address them as soon as possible. You have an on-premises network and an Azure subscription. The on-premises network has several branch offices. A branch office in Toronto contains a virtual machine named VM1 that is configured as a file server. Users access the shared files on VM1 from all the offices. You need to recommend a solution to ensure that the users can access the shared files as quickly as possible if the Toronto branch office is inaccessible. What should you include in the recommendation? Your options are a Recovery Services vault and Windows Server Backup, an Azure blob container and Azure File Sync, a Recovery Services vault and Azure Backup, and an Azure file share and Azure File Sync. And folks, the correct answer here is option D, an Azure file share and Azure File Sync. An Azure file share provides a cloud-based SMB file share that can be mounted from any office, ensuring fast access to files even if the original on-premises VM is unavailable. And Azure File Sync allows you to centralize your file shares in Azure and sync files from your on-premises VM to an Azure file share. In case the Toronto branch office is inaccessible, users can access the files directly from the Azure file share without interruption. By caching frequently accessed files in Azure, Azure File Sync ensures that users can still access important files quickly. Folks, let's look at question number
26. You have an Azure subscription. The subscription contains 100 virtual machines that run Windows Server 2022 and have the Azure Monitor agent installed. You need to recommend a solution that meets the following requirements: forwards JSON-formatted logs from the virtual machines to a Log Analytics workspace; transforms the logs and stores the data in a table in the Log Analytics workspace. What should you include in the recommendation? The first part of the question is: to forward the logs, what should you include? And your options are a linked storage account for the Log Analytics workspace, an Azure Monitor data collection endpoint, and a service endpoint. Folks, for forwarding the logs, you should use an Azure Monitor data collection endpoint. An Azure Monitor data collection endpoint, also called a DCE, is designed to forward logs from various sources, including virtual machines, to a Log Analytics workspace. It allows you to collect custom log data in JSON format and forward it for analysis.
DC in combination with DCR which is data collection rules allows you to transform log data before it is ingested into log
analytics and after transformation the logs are stored in a custom or predefined table in the log analytics workspace. Now folks the next part of
workspace. Now folks the next part of this question is what should you include in the recommendation to transform the logs and store the data? Your options
are a KQL query, a WQL query, an XPath query. To transform the logs and store
query. To transform the logs and store the data in Azure Log Analytics, you should use KQL queries. KQL query language is used by
queries. KQL query language is used by Azure Log Analytics to query and transform data in the workspace. KQL
allows you to parse, filter, and manipulate the data being stored in the log analytics tables. It is specifically designed for log queries and data
manipulation in the Azure environment. Now friends, let's
environment. Now friends, let's understand why other options are incorrect.
WQL which is WI query language is used to query information from Windows management instrumentation which is WI typically for performance or
configuration data on Windows system but it is not used for transforming and querying logs in Azure log analytics.
Now, XPath is a query language used for selecting nodes from XML documents, not for querying or transforming log data in log analytics. Folks, if you are liking the
Folks, if you are liking the content, do not forget to hit the like button and subscribe to the channel.
Question number 27. You have a multi-tier app named App1 and an Azure SQL database named SQL1. The backend service of App1 writes data to SQL1. Users use the App1 client to read the data from SQL1. During periods of high utilization, the users experience delays retrieving the data. You need to minimize how long it takes for data requests. What should you include in the solution? Your options are Azure Cache for Redis, Azure Content Delivery Network, Azure Data Factory, and Azure Synapse Analytics. Friends, to minimize delays in retrieving data during periods of high utilization in the app, the best solution would be Azure Cache for Redis, because it stores frequently accessed data in memory, allowing for faster data retrieval compared to querying the database every time. It is ideal for improving application performance by reducing latency and offloading the backend database during high-utilization periods. Now let's understand why the other options are incorrect. Azure Content Delivery Network is used for delivering static content, such as images or videos, from globally distributed servers; it is not designed for improving the performance of database-driven applications. Azure Data Factory is a data integration service that automates data movement and transformation between sources; it is not relevant for caching or minimizing database query response times. And Azure Synapse Analytics is designed for large-scale data analysis and business intelligence (OLAP), which is not suited for the low-latency, high-speed transactional data access (OLTP) required in this case.
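The pattern behind this answer is cache-aside: check the cache first, fall back to the database on a miss, then populate the cache. A minimal Python sketch, using a plain dict as a stand-in for Azure Cache for Redis (the query function and data are invented for illustration):

```python
cache = {}      # stand-in for Azure Cache for Redis
db_reads = 0    # counts how often we hit the "database"

def query_database(order_id):
    """Pretend SQL query; slow and expensive in the real system."""
    global db_reads
    db_reads += 1
    return {"order_id": order_id, "status": "shipped"}

def get_order(order_id):
    """Cache-aside: serve from cache when possible, else read the DB and cache it."""
    if order_id in cache:
        return cache[order_id]          # fast in-memory hit
    result = query_database(order_id)   # miss: go to the backend database
    cache[order_id] = result            # populate the cache for next time
    return result

get_order(42)    # miss -> one DB read
get_order(42)    # hit  -> no extra DB read
print(db_reads)  # 1
```

During a traffic spike, repeated reads of hot rows hit the in-memory cache instead of SQL1, which is exactly the offloading the answer describes.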
Question number 28 of the series. You are designing an app that will include two components. The components will communicate by sending messages via a queue. You need to recommend a solution to process the messages by using a first-in, first-out (FIFO) pattern. What should you include in the recommendation? Your options are storage queues with a custom metadata setting, Azure Service Bus queues with partitioning enabled, Azure Service Bus queues with sessions enabled, and storage queues with a stored access policy. Folks, you should use Azure Service Bus queues with sessions enabled. This ensures FIFO messaging, because session-enabled queues allow messages that share the same session ID to be processed in the order they were sent. Now, let's understand why the other options are incorrect. Storage queues with a custom metadata setting: folks, storage queues do not natively support FIFO. The next option, Azure Service Bus queues with partitioning enabled: partitioning does not guarantee FIFO, as it distributes messages across multiple partitions. And the last option, storage queues with a stored access policy: this is for controlling access, not for message ordering.
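A toy Python sketch of why sessions give FIFO: messages tagged with the same session ID are handled in arrival order, even when several sessions interleave on the same queue (the message shapes here are invented for illustration):

```python
from collections import defaultdict

# Interleaved messages on one queue; each carries a session ID, like Service Bus sessions.
queue = [
    {"session": "order-1", "seq": 1},
    {"session": "order-2", "seq": 1},
    {"session": "order-1", "seq": 2},
    {"session": "order-2", "seq": 2},
    {"session": "order-1", "seq": 3},
]

def receive_by_session(messages):
    """Group messages by session ID, preserving arrival order within each session."""
    sessions = defaultdict(list)
    for msg in messages:
        sessions[msg["session"]].append(msg["seq"])
    return dict(sessions)

ordered = receive_by_session(queue)
print(ordered)  # {'order-1': [1, 2, 3], 'order-2': [1, 2]}
```

In real Service Bus, a session-enabled receiver locks one session at a time, which is what turns this grouping into an actual FIFO processing guarantee per session.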
You have an Azure subscription that contains the resources shown in the following table. You have three resources of the virtual machine type and three resources of the virtual network type. VM1 is a front-end component in the Central US Azure region, VM2 is a backend component in the East US Azure region, and VM3 is a backend component in the East US 2 Azure region. VNET1 hosts VM1, VNET2 hosts VM2, and VNET3 hosts VM3. You create peering between VNET1 and VNET2 and between VNET1 and VNET3. The virtual machines host an HTTPS-based client-server application and are accessible only via the private IP address of each virtual machine. You need to implement a load balancing solution for VM2 and VM3. The solution must ensure that if VM2 fails, requests will be routed automatically to VM3, and if VM3 fails, requests will be routed automatically to VM2. What should you include in the solution? Folks, your options are Azure Firewall Premium, Azure Application Gateway v2, a cross-region load balancer, and Azure Front Door Premium. Friends, there are several factors that you need to take into consideration while answering this question. The first is that the requirement is to set up load balancing, so you can automatically rule out option A, Azure Firewall Premium, as it is for managing and filtering network traffic, not load balancing. The second factor is that the VMs are located in different Azure regions: Central US, East US, and East US 2. Based on this, you can rule out option B, because Azure Application Gateway is region-specific and is designed to handle traffic within a single region. The third and final requirement is to make use of private IPs, and for that very reason you will rule out the cross-region load balancer. While both Azure Front Door Premium and the cross-region load balancer support load balancing across regions, the cross-region load balancer does not support the use of private IPs. You can refer to the link on your screen to understand all the limitations of cross-region load balancers.
Next question. You have an Azure subscription that contains the SQL servers on Azure shown in the following table. You have two SQL servers named SQLsvr1 and SQLsvr2. SQLsvr1 is in resource group RG1 and is hosted in the East US location, whereas SQLsvr2 is in RG2 and is hosted in the West US location. The subscription contains the storage accounts shown in the following table. You have two storage accounts named storage1 and storage2. Storage1 is in the RG1 resource group, is hosted in the East US location, and is of the StorageV2 (general-purpose v2) kind. Storage2 is hosted in RG2, is located in the Central US region, and is of the BlobStorage kind. You create the Azure SQL databases shown in the following table. You basically create three SQL databases named SQLdb1, SQLdb2, and SQLdb3. SQLdb1 and SQLdb2 are in SQLsvr1, whereas SQLdb3 is in SQLsvr2. The pricing tier for SQLdb1 and SQLdb2 is Standard, whereas SQLdb3 is Premium. Now, for each of the following statements, select yes if the statement is true; otherwise, select no. Your first statement is: when you enable auditing for SQLdb1, you can store the audit information to storage1. This statement is correct, folks. You will be able to store audit information for SQLdb1 in storage1. We will shortly head to the Azure portal to do a practical around this to understand the factors involved, but first let's look at the other statements of this question. The next statement is: when you enable auditing for SQLdb2, you can store the audit information to storage2. This is incorrect and not possible, folks. Let's look at the third statement: when you enable auditing for SQLdb3, you can store the audit information to storage2. This is also an incorrect statement. You cannot store SQLdb3 audit information to storage2.
Now, to understand the reason why a particular statement was chosen as right or wrong, I have created an Azure SQL server, created multiple databases in it, and created different storage accounts to replicate the kind of scenarios we had in our question. I have tried to do a bit more than what is covered in the question, so that you are prepared for certain variations of these types of questions as well. So what I have done is I have created an Azure SQL server in the West US region, and then I have created two databases in it: one database uses the Standard pricing tier and the other uses the Premium pricing tier, because that is what was done in our question. Now, there was a talk about storage accounts, so let's go back to the storage accounts and have a look at them. The storage accounts that I have created are in different resource groups. I've created three storage accounts; two of them are in a different resource group than the one where the Azure SQL server is hosted. And folks, two of these storage accounts are in the same location as the Azure SQL server, whereas one storage account is in a different location. Now let's go back to our Azure SQL server, explore the databases, and try to configure the audit logging. So let's click on SQL databases and first try this in the Standard pricing tier database. How do you configure audit logging? You scroll down in the left-hand menu; under Security you will see Auditing. Auditing is not enabled at the instance level, which is why it says server-level auditing is disabled, but we are going to enable it at the database level. So what we'll do is enable Azure SQL auditing. When you enable it, you have the option to choose the destination, and there are three types of destinations that are supported: storage, Log Analytics, and Event Hub. To mimic the scenario in our question, we are going to choose storage. Now it asks us to choose a subscription, so I'll go ahead and choose my subscription. Here you can see that I have created three storage accounts, but only two of them are showing up, because they are in the same region as our Azure SQL instance. Let's go back and confirm that. How will you confirm? We'll go back to the Overview section, and you can see the location of the Azure SQL server is West US. Now I'll go back to my storage accounts, and you can see two storage accounts are in West US. What this tells us is that no matter which resource group your storage accounts are created in, whether they are in the same resource group as your Azure SQL server or in a different one, it doesn't really matter. The only thing that matters here is that they need to be in the same region as the Azure SQL server. That is the first thing you have to take into consideration while answering this question. Now, the next data point in the question was focused on the pricing tier of the databases, so let's go ahead and see if the pricing tier makes any difference. Again, we'll go to the databases. I have already opened the first database, and we'll again try to enable auditing. Here you will see that in the first database, which is SQLDB1407, both of these storage accounts are showing up. Let's scroll back; this one was using the Standard pricing tier, and both storage accounts are still showing up. Now let's go to the other database, SQLDB1989, which is a Premium tier. Let's try to enable auditing for this one and see how many storage accounts show up. I'll again go through the same options, and I can see both storage accounts are showing up here. What this tells us is that the pricing tier doesn't really matter in this case. Resource groups also don't really matter, and even the account kind doesn't matter. What matters is the location of the SQL server and the location of the storage accounts. If they are in the same location, then you can go ahead and use them for your audit logging. If they are not in the same location, you can't use them.
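The selection rule the portal applied above can be written down directly: keep only the storage accounts whose region matches the SQL server's region, ignoring resource group, kind, and pricing tier. A small Python sketch (the account list mirrors the demo, with invented names):

```python
def audit_targets(server_region, storage_accounts):
    """Return storage accounts usable as audit destinations: same region as the server."""
    return [a["name"] for a in storage_accounts if a["region"] == server_region]

accounts = [
    {"name": "storage1", "region": "westus", "resource_group": "RG1", "kind": "StorageV2"},
    {"name": "storage2", "region": "westus", "resource_group": "RG2", "kind": "BlobStorage"},
    {"name": "storage3", "region": "centralus", "resource_group": "RG2", "kind": "StorageV2"},
]

print(audit_targets("westus", accounts))  # ['storage1', 'storage2']
```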
This is question number 31 of the series. You have SQL Server on an Azure virtual machine. Databases are written nightly as part of a batch process. You need to recommend a disaster recovery solution for the data. The solution must meet the following requirements: provide the ability to recover in the event of a regional outage; support a recovery time objective (RTO) of 15 minutes; support a recovery point objective (RPO) of 24 hours; support automated recovery; and minimize costs. What should you include in the recommendation? Your options are Azure virtual machine availability sets, Azure Disk Backup, an Always On availability group, and Azure Site Recovery. Friends, the right Azure service for this use case would be option D, Azure Site Recovery. Whenever there are such data-driven questions where it's not possible to remember everything, apply the elimination approach. The first option, availability sets, only ensures that VMs within the same region are spread across different fault domains and update domains. They do not protect against regional outages, so it can be ruled out. The next option, disk backup, provides data protection but does not provide automated failover or protection in case of regional outages, so you can rule this one out as well. The third option, Always On availability groups, offers high availability and disaster recovery but requires SQL Server Enterprise edition, which is more expensive. We can rule it out, as we need to minimize costs as part of the requirements. So folks, I hope you now understand why Azure Site Recovery has been chosen as the correct answer in this case.
Next question. You have a .NET web service named Service1 that performs the following tasks: reads and writes temporary files to the local file system, and writes to the application event log. You need to recommend a solution to host Service1 in Azure. The solution must meet the following requirements: minimize maintenance overhead and minimize costs. What should you include in the recommendation? Your options are an Azure App Service web app, an Azure virtual machine scale set, an App Service Environment (ASE), and an Azure Functions app. Folks, Azure App Service is the optimal choice, as it provides a fully managed, scalable, and cost-effective platform for hosting a .NET web service with minimal maintenance overhead. While Azure Functions, virtual machine scale sets, and App Service Environments can also host web services, they will not provide the same balance of minimal maintenance overhead and cost-effectiveness as Azure App Service web apps do in this scenario.
Next question. Your company has 300 virtual machines hosted in a VMware environment. The virtual machines vary in size and have various utilization levels. You plan to move all the virtual machines to Azure. You need to recommend how many and what size Azure virtual machines will be required to move the current workloads to Azure. The solution must minimize administrative effort. What should you use to make the recommendation? Your options are the Azure pricing calculator, Azure Advisor, Azure Migrate, and Azure Cost Management. Folks, Azure Migrate will be best suited for this use case. Let's head to the Microsoft documentation to understand this in more detail. So folks, let's go through the Discovery and assessment tool, which is part of Azure Migrate, as that will help us answer why we have chosen Azure Migrate as the correct choice. The Azure Migrate Discovery and assessment tool discovers and assesses on-premises VMware VMs, Hyper-V VMs, and physical servers for migration to Azure. Here's what the tool does. Azure readiness: assesses whether on-premises servers, SQL Servers, and web apps are ready for migration to Azure. This is one of the requirements of our question. Azure sizing: estimates the size of Azure VMs, the Azure SQL configuration, and the number of Azure VMware Solution nodes after migration. This is the critical part that has helped us answer this question, because we are after how many and what size Azure virtual machines will be required to move the current workloads to Azure. Azure cost estimation: estimates the costs of running on-premises servers in Azure, so you get an idea of the costs as well. Dependency analysis: identifies cross-server dependencies and optimization strategies for moving interdependent servers to Azure. So folks, I hope you now understand why Azure Migrate has been chosen as the correct answer in this case. But if you still have any doubts, please post them in the comment section.
Question number 34 of the series. You are deploying a sales application that will contain several Azure cloud services and handle different components of a transaction. Different cloud services will process customer orders, billing, payment, inventory, and shipping. You need to recommend a solution to enable the cloud services to asynchronously communicate transaction information by using XML messages. What should you include in the recommendation? Your options are Azure Service Fabric, Azure Data Lake, Azure Service Bus, and Azure Traffic Manager. Folks, the best recommendation for enabling asynchronous communication between the cloud services using XML messages is Azure Service Bus. Azure Service Bus is designed for asynchronous messaging between distributed applications. It allows cloud services to communicate reliably even if they are running at different times or speeds. It supports sending XML messages between services, enabling a message-based architecture where each cloud service can process different parts of a transaction and send messages to other services. Let's understand why the other options are incorrect. Azure Service Fabric is a microservices orchestration platform, primarily focused on hosting and managing microservices. Azure Data Lake is designed for big data storage and analytics, whereas Azure Traffic Manager is a DNS-based traffic routing service used to distribute incoming traffic across multiple endpoints.
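As a toy illustration of the XML messages such services would exchange (element names invented for illustration), Python's standard library can serialize and parse a transaction like this:

```python
import xml.etree.ElementTree as ET

def order_to_xml(order_id, amount):
    """Build an XML transaction message a cloud service could drop on a Service Bus queue."""
    root = ET.Element("transaction")
    ET.SubElement(root, "orderId").text = str(order_id)
    ET.SubElement(root, "amount").text = str(amount)
    return ET.tostring(root, encoding="unicode")

def xml_to_order(payload):
    """Parse the XML message back into a dict on the receiving service."""
    root = ET.fromstring(payload)
    return {"orderId": int(root.findtext("orderId")),
            "amount": float(root.findtext("amount"))}

msg = order_to_xml(1001, 49.95)
print(msg)  # <transaction><orderId>1001</orderId><amount>49.95</amount></transaction>
print(xml_to_order(msg))
```

Each service (billing, inventory, shipping) would consume such a message from its queue or topic subscription at its own pace, which is the asynchronous decoupling Service Bus provides.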
Let's look at the next question. You have data files in Azure Blob Storage. You plan to transform the files and move them to Azure Data Lake Storage. You need to transform the data by using a mapping data flow. Which service should you use? Your options are Azure Databricks, Azure Storage Sync, Azure Data Factory, and Azure Data Box Gateway. Folks, you should use Azure Data Factory in this case. Azure Data Factory is a cloud-based data integration service that allows you to create, schedule, and manage data pipelines that can move and transform data across different sources and destinations, including Azure Blob Storage and Azure Data Lake Storage. As always, let's understand the use case of the other services. Azure Databricks is a cloud-based analytics platform that allows you to process large amounts of data using Apache Spark. It can also be used for data transformation and ETL, but it requires more technical expertise and development effort than using Azure Data Factory mapping data flows. Azure Storage Sync is a service that allows you to sync on-premises file servers with Azure file shares, but it does not support data transformation. And friends, Azure Data Box Gateway is a gateway device that allows you to transfer large amounts of data to Azure, but it does not support data transformation using a mapping data flow.
A company plans to implement an HTTP-based API to support a web app. The web app allows customers to check the status of their orders. The API must meet the following requirements: implement Azure Functions, provide public read-only operations, and prevent write operations. You need to recommend which HTTP methods and authorization level to configure. What should you recommend? The first part of the question is about HTTP methods, and your options are GET only, GET and POST only, and GET, POST, and OPTIONS only. Folks, you should use the GET HTTP method in this case, as it is typically used for retrieving data without modifying it, and there is a requirement in the question to prevent write operations. The next part of the question is focused on the authorization level, and your options are function, anonymous, and admin. For the authorization level, you can set it to anonymous. Since the API needs to be publicly accessible for read-only operations, this will enable any user to access the API without providing a key, allowing them to check the status of their orders as required. Both of the other options require passing some sort of key, which is not an ideal or practical solution for this use case.
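The GET-only requirement can be sketched as a plain handler guard (this is a simplified stand-in, not the Azure Functions programming model; in a real function app you would restrict the allowed methods in the HTTP trigger's configuration and set the auth level to anonymous):

```python
import json

ORDERS = {"1001": "shipped"}  # invented sample data

def handle_request(method, order_id):
    """Read-only API sketch: allow GET, reject anything that could write."""
    if method != "GET":
        return 405, json.dumps({"error": "method not allowed"})
    status = ORDERS.get(order_id)
    if status is None:
        return 404, json.dumps({"error": "order not found"})
    return 200, json.dumps({"orderId": order_id, "status": status})

print(handle_request("GET", "1001"))   # 200 with the order status
print(handle_request("POST", "1001"))  # 405: write operations are prevented
```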
Next question. Your on-premises network contains a file server named Server1 that stores 500 GB of data. You need to use Azure Data Factory to copy the data from Server1 to Azure Storage. You add a new data factory. What should you do next? The first part of the question is what you should do from Server1, and your options are: install an Azure File Sync agent, install a self-hosted integration runtime, or install the File Server Resource Manager role service. Folks, you will have to install a self-hosted integration runtime. A self-hosted integration runtime can run copy activities between a cloud data store and a data store in a private network. It can also dispatch transform activities against compute resources in an on-premises network or an Azure virtual network. The installation of a self-hosted integration runtime needs an on-premises machine or a virtual machine inside a private network. That is the reason to choose the self-hosted integration runtime installation. The next part of the question is what you should do from the data factory, and your options are: create a pipeline, create an Azure Import/Export job, or provision an Azure SQL Server Integration Services (SSIS) integration runtime. Folks, you will need to create a pipeline. A pipeline is a logical grouping of activities that perform the task of copying the data from Server1 to Azure Storage.
Question number 38. You plan to deploy Azure Databricks to support a machine learning application. Data engineers will mount an Azure Data Lake Storage account to the Databricks file system. Permissions to folders are granted directly to the data engineers. You need to recommend a design for the planned Databricks deployment. The solution must meet the following requirements: ensure that the data engineers can only access folders to which they have permissions, minimize development effort, and minimize costs. What should you include in the recommendation? The first part of the question is about the Databricks SKU, and you only have two options: Premium or Standard. Folks, you will have to choose the Premium SKU of Databricks. Choosing the Premium SKU is driven by the second part of this question, so let's look at that first to understand it in more detail. Cluster configuration: your options are credential passthrough, managed identities, MLflow, a runtime that contains Photon, and a secret scope. Friends, you will have to use credential passthrough to meet the given requirements. Credential passthrough allows users to authenticate to Azure Data Lake Storage using their own Entra ID credentials. This minimizes development effort and costs, as it does not require additional Entra ID application registrations and service principal management. And folks, the Premium Databricks SKU is required for credential passthrough, which is why we can't use the Standard SKU in this case.
Next question. You need to recommend a solution to generate a monthly report of all the new Azure Resource Manager resource deployments in your Azure subscription. What should you include in the recommendation? Your options are the Azure activity log, Azure Advisor, Azure Analysis Services, and Azure Monitor action groups. Folks, the Azure activity log tracks all management operations in an Azure subscription, including resource deployments. It provides detailed records of when resources were created, modified, or deleted. You can query the activity log for deployment events and filter by resource type, time range, and specific actions like resource creation. The logs can be exported to Log Analytics, storage, or Event Hubs for further analysis or reporting, making it easier to generate a monthly report. Now folks, let's understand why the other options are incorrect. Azure Advisor provides recommendations for optimizing your Azure resources but does not track resource deployment events. Azure Analysis Services is a data modeling and analytics service, not meant for tracking resource management events. And the final option, Azure Monitor action groups, triggers notifications or actions based on Azure Monitor alerts, but they don't track resource deployments. So folks, if you are liking the content, do not forget to hit the like button and subscribe to the channel.
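The reporting step the answer describes reduces to filtering activity-log events by operation and month. A Python sketch over invented event records (treat the exact operation string as illustrative of an ARM deployment write event, not a verified schema):

```python
from datetime import datetime

# Invented activity-log-style events.
events = [
    {"operation": "Microsoft.Resources/deployments/write", "timestamp": "2025-03-02T09:00:00"},
    {"operation": "Microsoft.Compute/virtualMachines/delete", "timestamp": "2025-03-05T11:00:00"},
    {"operation": "Microsoft.Resources/deployments/write", "timestamp": "2025-03-20T16:30:00"},
    {"operation": "Microsoft.Resources/deployments/write", "timestamp": "2025-04-01T08:00:00"},
]

def monthly_deployments(log, year, month):
    """Count deployment events in the given month, as a monthly report would."""
    count = 0
    for e in log:
        ts = datetime.fromisoformat(e["timestamp"])
        if (e["operation"] == "Microsoft.Resources/deployments/write"
                and ts.year == year and ts.month == month):
            count += 1
    return count

print(monthly_deployments(events, 2025, 3))  # 2
```

In practice you would run the equivalent filter as a KQL query against the exported activity log in a Log Analytics workspace.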
Let's look at question number 40 of the series. You have an Azure App Service web app that uses a system-assigned managed identity. You need to recommend a solution to store the settings of the web app as secrets in an Azure key vault. The solution must meet the following requirements: minimize changes to the app code, and use the principle of least privilege. What should you include in the recommendation? The first part of the question is about the Key Vault integration method, and your options are Key Vault references in application settings, Key Vault references in appsettings.json, Key Vault references in web.config, and the Key Vault SDK. Folks, you should use Key Vault references in application settings. This method allows you to reference Key Vault secrets directly in the Azure App Service application settings, avoiding the need for code changes, which is one of the requirements. The secrets are injected as environment variables into the app, minimizing changes to the code and keeping things secure. You only need to update the application settings of the App Service in the Azure portal by referencing the Key Vault secret URI. No changes to appsettings.json or web.config are required in this case. The next part of the question is about the Key Vault permissions for the managed identity, and your options are the get permission on keys, the list and get permissions on keys, the get permission on secrets, and the list and get permissions on secrets. Following the principle of least privilege, you should grant the get permission on secrets. This ensures that the app can read the secrets but not list or manage them. The question specifically says to save the settings as secrets, so options A and B, which relate to keys, are automatically incorrect.
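With Key Vault references in application settings, App Service resolves the reference and the app simply reads an ordinary environment variable, so the code does not change at all. A hedged sketch (the setting name and vault URI are invented; the `@Microsoft.KeyVault(SecretUri=...)` syntax shown in the comment is the documented App Service reference format):

```python
import os

# In Azure you would NOT set this yourself: the application setting would hold
#   @Microsoft.KeyVault(SecretUri=https://myvault.vault.azure.net/secrets/DbPassword/)
# and App Service, using the managed identity, injects the resolved secret value.
# Here we fake the injection so the sketch runs locally.
os.environ["DB_PASSWORD"] = "resolved-secret-value"  # stand-in for the resolved secret

def get_db_password():
    """App code is unchanged: it reads a plain environment variable."""
    return os.environ["DB_PASSWORD"]

print(get_db_password())  # resolved-secret-value
```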
Next question, friends. You have an Azure subscription. The subscription contains a multi-tiered app named App1 that is distributed across multiple containers hosted in Azure Container Instances. You need to deploy an Azure Monitor monitoring solution for App1. The solution must meet the following requirements: support synthetic transaction monitoring to monitor traffic between the App1 components, and minimize development effort. What should you include in the solution? Your options are Network Insights, Application Insights, Container Insights, and Log Analytics Workspace Insights. Folks, you should include Application Insights in your recommendation. Application Insights provides synthetic transaction monitoring through its availability tests. These tests can simulate user interactions or monitor traffic between app components to ensure that each service is available and responding correctly. Azure Container Instances support built-in integration with Application Insights to automatically monitor container performance, telemetry, and dependencies. Now folks, let's understand why the other options are incorrect. Network Insights is primarily used for monitoring network health, performance, and traffic flows. Container Insights is used for monitoring the health and performance of containerized workloads, such as CPU and memory. Log Analytics Workspace Insights allows you to centralize logs and analyze them across resources. So folks, I hope you now understand why Application Insights, which is option B, has been chosen as the correct answer in this case.
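The availability-test idea can be illustrated with a minimal synthetic probe. This is only a sketch of the concept: Application Insights runs such checks on a schedule from multiple global locations and records the results as telemetry, none of which this toy function does.

```python
import urllib.request

def availability_probe(url: str, timeout: float = 3.0) -> bool:
    """A bare-bones synthetic transaction: ping a component's endpoint
    and report whether it answered with a 2xx status."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 300
    except OSError:
        # Connection refused, DNS failure, timeout -> component unavailable.
        return False

# An unreachable endpoint simply reports as unavailable:
print(availability_probe("http://127.0.0.1:1/health"))
```

The "minimize development effort" requirement is met precisely because you do not write this yourself: you configure the URL and frequency, and the platform does the probing.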
Question number 42. You have 12 Azure subscriptions and three projects. Each project uses resources across multiple subscriptions. You need to use Microsoft Cost Management to monitor costs on a per-project basis. The solution must minimize administrative effort. Which two components should you include in the solution? Your options are budgets, resource tags, custom role-based access control (RBAC) roles, management groups, and Azure Boards. Folks, the two key components you should include are budgets and resource tags. Budgets in Microsoft Cost Management enable you to set spending limits and track costs against those limits. You can create a separate budget for each project and monitor its cost trends, helping you manage per-project costs effectively. Resource tags allow you to categorize and organize Azure resources across multiple subscriptions by applying custom metadata, such as project name or environment, to resources. This is ideal for tracking and monitoring costs on a per-project basis. Now, as always, let's understand why the other options are not applicable here. Custom RBAC roles can be used for managing permissions; they are not directly related to cost monitoring. Azure management groups provide a level of scope above subscriptions: you organize subscriptions into containers called management groups and apply your governance conditions to them. And Azure Boards is part of Azure DevOps, for work item tracking and project management.
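The per-project grouping that tags make possible can be sketched with plain data. The cost rows below are hypothetical, and this is only an illustration of the grouping Cost Management performs when you filter or group costs by a resource tag.

```python
from collections import defaultdict

# Hypothetical cost rows spanning several subscriptions; the "project" tag
# is the metadata that Cost Management groups and filters on.
cost_rows = [
    {"resource": "vm-web-01", "subscription": "sub-1", "tags": {"project": "alpha"}, "cost": 120.0},
    {"resource": "sql-db-01", "subscription": "sub-2", "tags": {"project": "beta"},  "cost": 80.0},
    {"resource": "vm-api-02", "subscription": "sub-3", "tags": {"project": "alpha"}, "cost": 50.0},
]

# Group spend by the tag value, regardless of which subscription holds the resource.
per_project: dict = defaultdict(float)
for row in cost_rows:
    per_project[row["tags"]["project"]] += row["cost"]

print(dict(per_project))  # {'alpha': 170.0, 'beta': 80.0}
```

This is why tags minimize administrative effort here: one tag per resource replaces any attempt to reorganize 12 subscriptions around three projects.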
You have an on-premises network that uses the 172.16.0.0/16 IP address space. You plan to deploy 30 virtual machines to a new Azure subscription. You identify the following technical requirements: all Azure virtual machines must be placed on the same subnet, named Subnet1; all the Azure virtual machines must be able to communicate with all on-premises servers; and the servers must be able to communicate between the on-premises network and Azure by using a site-to-site VPN. You need to recommend a subnet design that meets the technical requirements. What should you include in the recommendation? To answer, you need to assign the appropriate network address to the correct subnet. Each network address may be used once, more than once, or not at all. The network addresses are 172.16.0.0/16, 172.16.1.0/27, 192.168.0.0/24, and 192.168.1.0/27. You need to tell which network address should be used for Subnet1 and which for the gateway subnet. Folks, you don't really need in-depth networking knowledge to answer this type of question. The first key thing to take into consideration is that you cannot have overlapping IP addresses between the on-premises network and the Azure virtual machines. So you can rule out both network addresses containing 172.16, as they would overlap with the on-premises address range of 172.16.0.0/16. Now, you need to deploy a total of 30 virtual machines. Of the two remaining ranges, the /24 has 256 IP addresses and the /27 has 32. Friends, remember that Azure reserves five IP addresses in each subnet for internal purposes, which leaves only 27 usable addresses in the /27 range, not enough for 30 virtual machines. That means the correct answer for Subnet1 is 192.168.0.0/24. The gateway subnet would then be 192.168.1.0/27, because it keeps the gateway subnet separate from the VM subnet (Subnet1) and also does not overlap with the on-premises IP address space.
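The arithmetic above can be checked with the standard library. A small sketch, assuming the five-address reservation Azure applies to every subnet:

```python
import ipaddress

AZURE_RESERVED = 5  # Azure holds back 5 addresses in every subnet

def usable_ips(cidr: str) -> int:
    # Total addresses in the range minus the Azure-reserved ones.
    return ipaddress.ip_network(cidr).num_addresses - AZURE_RESERVED

# Candidate ranges that don't clash with on-premises:
print(usable_ips("192.168.0.0/24"))  # 251 -> fits 30 VMs (Subnet1)
print(usable_ips("192.168.1.0/27"))  # 27  -> too small for 30 VMs; fine for a gateway subnet

# The overlap check that rules out the 172.16.x.x candidates:
on_prem = ipaddress.ip_network("172.16.0.0/16")
print(on_prem.overlaps(ipaddress.ip_network("172.16.1.0/27")))  # True
```

The same two checks, overlap with on-premises and usable-address count, settle most subnet-design questions of this kind.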
Next question. You are designing an application that will use Azure virtual machines to analyze video files. The files will be uploaded from corporate offices that connect to Azure by using ExpressRoute. You plan to provision an Azure storage account to host the files. You need to ensure that the storage account meets the following requirements: supports video files of up to 7 TB; provides the highest availability possible; ensures that storage is optimized for the large video files; and ensures that files from the on-premises network are uploaded by using ExpressRoute. How should you configure the storage account? The first part of the question focuses on choosing the storage account type. Your options are premium file shares, premium page blobs, and standard general-purpose v2. Now friends, whenever you want to store large video files, always look for block blobs, which are optimized for large streaming workloads; of the given options, it is the standard general-purpose v2 account that supports block blobs, while premium page blob accounts are designed for page blobs. The question also specifically focuses on providing the highest availability possible, so you will have to go with general-purpose v2 storage, as the premium tiers do not support GRS. The next part of the question asks which type of data redundancy you should choose. Your options are zone-redundant storage, locally redundant storage, and geo-redundant storage. Following on from the previous explanation, the redundancy type you will choose here is geo-redundant storage, as it offers the highest availability and durability by replicating data to a secondary region. The final part of the question talks about networking. Your options are a Route Server, a private endpoint, and a service endpoint. Folks, you will have to choose a private endpoint here, as it ensures secure and direct connectivity to the storage account over ExpressRoute. Let's understand why the other options are incorrect. Azure Route Server is primarily designed to facilitate dynamic routing between network virtual appliances and virtual networks. It makes integration easier for network virtual appliances such as firewalls or routers by using the Border Gateway Protocol. Service endpoints extend your VNet's private address space to Azure services over Microsoft's backbone, thereby providing secure access. They enable you to use your virtual network's private IPs to connect to Azure services like Azure Storage. However, service endpoints apply only to traffic originating from the virtual network; they do not give on-premises clients private connectivity over ExpressRoute, so they cannot satisfy the upload requirement here.
Let's look at question number 45. You have 100 Microsoft SQL Server Integration Services (SSIS) packages that are configured to use 10 on-premises SQL Server databases as their destination. You plan to migrate the 10 on-premises databases to Azure SQL Database. You need to recommend a solution to host the SSIS packages in Azure. The solution must ensure that the packages can target the SQL database instances as their destinations. What should you include in the recommendation? Your options are Data Migration Assistant, Azure Data Catalog, SQL Server Migration Assistant, and Azure Data Factory. Folks, DMA is primarily used to assess and migrate on-premises databases to Azure SQL Database or SQL Server on Azure VMs. It does not handle SSIS package migration or execution, so it is an incorrect choice. The next one is Azure Data Catalog, which is a metadata management tool that helps organizations register, discover, and understand data sources; it in no way executes or manages SSIS packages, so it is another incorrect option. SQL Server Migration Assistant is typically used to migrate data from other databases, such as Oracle or MySQL, to SQL Server or Azure SQL Database. It is again not designed to migrate or execute SSIS packages, which leaves us with the fourth and final option as the correct choice here. Azure Data Factory provides the Azure-SSIS integration runtime, which allows you to lift and shift your existing SSIS packages to Azure. You can run your SSIS packages in the cloud with minimal changes, allowing them to target Azure SQL Database instances as destinations. Refer to the link on your screen to understand more about this. Next question.
You have a Microsoft Entra tenant that syncs with an on-premises Active Directory domain. Your company has a line-of-business application that was developed internally. You need to implement SAML single sign-on and enforce multifactor authentication when users attempt to access the application from an unknown location. Which two features should you include in the solution? Your options are Microsoft Entra Privileged Identity Management, Azure Application Gateway, Microsoft Entra enterprise applications, Microsoft Entra ID Protection, and Conditional Access policies. Folks, you should use the combination of Entra enterprise applications and Conditional Access policies to implement the given requirement. Microsoft Entra enterprise applications are used to configure single sign-on for SaaS and custom applications, including those using SAML for authentication. And Conditional Access policies allow you to enforce conditions such as requiring multifactor authentication when accessing resources. You can define a policy to enforce MFA when users access the application from an unknown or untrusted location, ensuring higher security.
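The decision logic of that Conditional Access policy can be expressed as a toy rule. This is purely illustrative: the real policy is configured in the Entra admin center, not in code, and the location names below are hypothetical.

```python
# Hypothetical named locations marked as trusted in the tenant.
TRUSTED_LOCATIONS = {"HQ", "BranchOffice"}

def access_requirements(sign_in_location: str) -> list:
    """Return the grant controls applied to a sign-in, mimicking the
    described Conditional Access policy: SSO always, MFA only when the
    sign-in comes from outside the trusted locations."""
    grants = ["sso"]
    if sign_in_location not in TRUSTED_LOCATIONS:
        grants.append("mfa")
    return grants

print(access_requirements("HQ"))          # ['sso']
print(access_requirements("CoffeeShop"))  # ['sso', 'mfa']
```

The split of responsibilities matches the two correct answers: the enterprise application handles the SAML SSO handshake, while Conditional Access layers the location-based MFA condition on top.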
Question number 47. You have an Azure subscription. You need to recommend a solution to provide developers with the ability to provision Azure virtual machines. The solution must meet the following requirements: only allow the creation of virtual machines in specific regions, and only allow the creation of specific sizes of virtual machines. What should you include in the recommendation? Your options are attribute-based access control, Azure Policy, Conditional Access policies, and role-based access control. Folks, this is one of the easiest questions from the AZ-305 exam perspective, as anyone who knows the basics of Azure should be able to answer it straight away. You will use Azure Policy in this case. Let's head to the Azure portal and understand the two policies that you can use to implement the given requirement. So friends, we are now in the Azure portal, in the policy definitions. Let's explore the first policy, Allowed locations, as that will help you implement the first requirement: only allow the creation of virtual machines in specific regions. There is also a policy named Allowed locations for resource groups; we are not going to talk about it at this stage, because it is specific to resource groups. We will talk about the global one, Allowed locations. If you read the description of this policy, it says: this policy enables you to restrict the locations your organization can specify when deploying resources, used to enforce your geo-compliance requirements. It excludes resource groups, as we have already seen that there is a separate policy to restrict resource group creation to certain locations. So friends, this policy is global and not just limited to virtual machines; you can restrict the creation of any resource to the given locations using this policy. Now let's look at another policy, which is more focused on the second requirement: only allow the creation of specific sizes of virtual machines. You can see there is a policy named Allowed virtual machine size SKUs. If you open it, the description says the policy enables you to specify a set of virtual machine size SKUs that your organization can deploy. So this is exactly what our question was focused on. We already have built-in policies in Azure to implement both of the requirements in our question. And even if a built-in policy did not exist, you could always go ahead and create a custom policy.
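To make the two policies concrete, here is a sketch of their rule sections expressed as Python dictionaries following the Azure Policy definition schema. The region and SKU values are examples, not recommendations, and real definitions usually take these lists as parameters rather than hard-coding them.

```python
# Rule portion of an "Allowed locations"-style policy: deny any resource
# whose location is outside the approved list (example regions).
allowed_locations_rule = {
    "if": {
        "not": {
            "field": "location",
            "in": ["eastus", "westeurope"],
        }
    },
    "then": {"effect": "deny"},
}

# Rule portion of an "Allowed virtual machine size SKUs"-style policy:
# deny VMs whose size is not in the approved list (example SKUs).
allowed_vm_skus_rule = {
    "if": {
        "allOf": [
            {"field": "type", "equals": "Microsoft.Compute/virtualMachines"},
            {
                "not": {
                    "field": "Microsoft.Compute/virtualMachines/sku.name",
                    "in": ["Standard_D2s_v3", "Standard_B2ms"],
                }
            },
        ]
    },
    "then": {"effect": "deny"},
}

print(allowed_locations_rule["then"]["effect"])
```

Reading the built-in definitions in the portal's JSON view is good exam practice: once you recognize the `if`/`then` shape, writing a custom policy is just a matter of picking the right field alias.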
Question number 48. You plan to deploy an Azure App Service web app that will have multiple instances across multiple Azure regions. You need to recommend a load-balancing service for the planned deployment. The solution must meet the following requirements: maintain access to the app in the event of a regional outage; support Azure Web Application Firewall; support cookie-based affinity; and support URL routing. What should you include in the recommendation? Your options are Azure Front Door, Azure Traffic Manager, Azure Application Gateway, and Azure Load Balancer. Folks, you should include Azure Front Door in your recommendation. Azure Front Door is a global load balancer with instant failover capabilities, so it will maintain access to the app in the event of a regional outage. It supports Azure Web Application Firewall integration for security, and it also supports cookie-based affinity for session stickiness. Finally, it supports URL-based routing for directing traffic to different backend pools based on URL patterns.
Next question, friends. You are designing a microservices architecture that will support a web application. The solution must meet the following requirements: deploy the solution on-premises and to Azure; support low-latency and hyperscale operations; allow independent upgrades to each microservice; and set policies for performing automatic repairs to the microservices. You need to recommend a technology. What should you recommend? Your options are Azure Container Instances, Azure Logic Apps, Azure Service Fabric, and Azure Virtual Machine Scale Sets. Folks, you should include Azure Service Fabric in the recommendation. Azure Service Fabric is a distributed systems platform that makes it easy to package, deploy, and manage scalable and reliable microservices and containers. It supports deployment both on-premises and to Azure, providing a consistent platform for managing and deploying microservices. It enables low-latency and hyperscale operations, as it is designed for building scalable and reliable applications. It allows independent upgrades to each microservice, as it supports versioning and rolling upgrades. It also provides built-in health monitoring and automatic repair of the microservices, with configurable policies.
The 50th question of the series. You plan to automate the deployment of resources to an Azure subscription. What is the difference between using Azure Blueprints and Azure Resource Manager (ARM) templates? Your options are: ARM templates remain connected to the deployed resources; only Blueprints can contain policy definitions; only ARM templates can contain policy definitions; Blueprints remain connected to the deployed resources. Folks, a blueprint preserves the relationship between the deployed resources and the blueprint definition, whereas in the case of an ARM template, no active relationship remains between the template and your deployed resources. This connection helps in tracking and auditing the resources. There is a link on your screen; go through it to understand more about the differences between ARM templates and Blueprints. So folks, that's all for this part of the series. If you have liked the content, do not forget to hit the like button and subscribe to the channel. I'll be back soon with more such questions in our AZ-305 exam practice question series, friends.