WUA and WSUS disagreeing on update GUID

I have encountered updates that are marked as needed by WUA with update GUIDs that do not match the GUIDs for the same updates in WSUS. The WSUS server is synchronizing all Products and Categories and all languages.
So the question is how could WUA know of a patch that WSUS does not (or that WSUS identifies by a different GUID)?
One example is Windows Internet Explorer 9 for Windows Server 2008 R2 for x64-based Systems:
• WUA update GUID: d8ba5dbf-aade-4125-bbdf-48dcc5950131
• WSUS update GUID: bd9f0b80-866f-4ded-a6d9-ed74da717519
For patch management solutions that rely on the update GUID being consistent between WUA and WSUS, this poses a challenge, to say the least.
Thanks for any help in advance,
Shady

This is a shot in the dark, but perhaps it is a different revision? The IUpdateIdentity interface defines both an UpdateID GUID and a revision number.
IUpdateIdentity API
I'm intrigued myself as to why this would happen.
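For reference, you can see both halves of the identity from PowerShell via the WUA COM API. A rough sketch (the Get-WsusUpdate lookup at the end assumes the UpdateServices module on the WSUS server):
$session = New-Object -ComObject Microsoft.Update.Session
$searcher = $session.CreateUpdateSearcher()
# List every update WUA considers needed, with its UpdateID GUID and revision.
$result = $searcher.Search("IsInstalled=0")
foreach ($update in $result.Updates) {
    "{0} (rev {1}) : {2}" -f $update.Identity.UpdateID, $update.Identity.RevisionNumber, $update.Title
}
# On the WSUS server, look up the same GUID for comparison:
# Get-WsusUpdate -UpdateId d8ba5dbf-aade-4125-bbdf-48dcc5950131
If the UpdateID matches but the RevisionNumber differs, you are looking at two revisions of the same update rather than two different updates.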


custom program error: 0x3f metaplex candy machine createSetCollectionDuringMintInstruction

I have a metaplex candy machine and collection that I set up several weeks back. Minting worked initially but is now failing.
The error reported is:
custom program error: 0x3f
This appears to come from the nested instruction to the metadata program, which should be set_and_verify_collection. The error is defined as:
readonly code: number = 0x3f;
readonly name: string = 'DataTypeMismatch';
It can be thrown from metadata deserialization.
https://github.com/metaplex-foundation/metaplex-program-library/blob/master/token-metadata/program/src/state/mod.rs
It is called for both the token metadata and the collection metadata.
I believe those are the only two places it would be thrown from in this method. AccountInfo is resolved for several accounts, but it's only deserialized into a typed entity, with size and type considerations, for those two entities.
Checking the metadata on the collection: it's present, and the length looks normal for Metaplex metadata accounts at 679 bytes.
Now, the metadata for the token being minted is not present because the tx failed. However, if I attempt a transaction without the 'SetCollectionDuringMint' instruction added, the tx succeeds.
Interesting. The metadata account for the token has zero bytes allocated.
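A quick way to confirm an account's allocation is to query getAccountInfo over plain JSON-RPC against devnet. A sketch (the metadata address is a placeholder):
$metadataPda = "REPLACE_WITH_METADATA_ACCOUNT_ADDRESS"  # placeholder
$body = @{ jsonrpc = "2.0"; id = 1; method = "getAccountInfo";
           params = @($metadataPda, @{ encoding = "base64" }) } | ConvertTo-Json -Depth 4
$resp = Invoke-RestMethod -Uri "https://api.devnet.solana.com" -Method Post -ContentType "application/json" -Body $body
if ($null -eq $resp.result.value) {
    "account does not exist"
} else {
    # A healthy Metaplex metadata account reports a non-zero length (679 bytes above).
    "data length: " + [Convert]::FromBase64String($resp.result.value.data[0]).Length
}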
I don't recall this changing. In fact, if I go through my source history to older revisions, I've not been explicitly requesting to create the metadata account. I've simply been pre-allocating the account and calling mint nft on the candy machine.
Did the candy machine change to no longer automatically create the metadata account for the minted NFT?
It occurred to me, almost as soon as I finished typing up the question, what the likely cause was.
It came to my attention a few weeks back that this older v2 version of the candy machine does not actually halt transaction execution on constraint violations, but rather charges the client a fee for executing the transaction incorrectly.
It's likely this 'bot tax' protocol is allowing the real error, which may be occurring earlier, to be suppressed.
v3 of the candy machine has made this something you can disable, but we are a bit coupled to v2 at the moment.
Anyhow, what I think has happened here is that the bot-taxing version of the candy machine allowed the NFT to mint but didn't actually finish setting it up. Then the next instruction, set collection during mint, was unable to complete.
The real failure is earlier in the transaction, somewhere during the mint, where we no longer meet the mint criteria, and the old version of the candy machine is just charging us and failing silently.
Unfortunately, the root cause is still not clear. One other change that would have occurred between now and then is that the collection is now 'live', having passed the go-live date. I'll have to dig through the validation constraints and see if there are any bot-tax-related short circuits around this go-live transition.
UPDATE: It looks like there were some changes specific to devnet's token metadata program, and my machine was affected. I'll need some new devnet machines.

Crystal Reports DB Logon prompt although all table/report connections are made

I'm updating an older report system that was developed using VS 2008 and Crystal Reports. After the updates, some reports started prompting for a database login, while others work perfectly (with the updates). The reports were changed to include new tables and fields. All table and report document connections are established via a common routine, similar to: SetDBLogon(myConnectionInfo, Me.CrystalReportViewer1.ReportSource)
Public Sub SetDBLogon(ByVal myConnectionInfo As ConnectionInfo, ByVal myReportDocument As ReportDocument)
    Dim myTables As Tables = myReportDocument.Database.Tables
    For Each myTable As CrystalDecisions.CrystalReports.Engine.Table In myTables
        ' Point each table at the new connection.
        Dim myTableLogonInfo As TableLogOnInfo = myTable.LogOnInfo
        myTableLogonInfo.ConnectionInfo = myConnectionInfo
        Try
            myTable.ApplyLogOnInfo(myTableLogonInfo)
        Catch ex As Exception
            MsgBox(ex.Message)
        End Try
    Next
    ' Sub-reports are scanned the same way (see below).
    For Each mySubreport As ReportDocument In myReportDocument.Subreports
        SetDBLogon(myConnectionInfo, mySubreport)
    Next
End Sub
It scans through each table and sets the connection, and it also scans sub-reports. I'm not sure what causes Crystal Reports to request a login when the connection has already been set explicitly. When correct credentials are provided, it still fails to connect.
I've tried removing the report object and inserting the latest version.
Here's the issue and the solution to this problem (in my case).
Crystal Reports data sources can include ADO.NET, OLE DB, ODBC, etc., with various drivers. The reports were created with a specific connection and driver that no longer applied, and I had used a new database connection. Since the application scans each report and eventually sets the correct connection parameters, this would normally work, and it has worked in the past. But the problem was that the target system didn't have the right drivers for the connection provider I used. What made this harder to troubleshoot is that the connectivity piece in Crystal Reports is not very intuitive, and duplicate connection names can be created with different providers (same names, different providers).
The solution was to open the report and go to:
Database Expert > Set Datasource Location
and this is the key part:
Select the connection with the correct provider.
In my case, this was SQLOLEDB
You can right-click the connection and choose "Properties" and check the provider.
Another way to resolve it would be to install the correct drivers and versions. In this case, since the SQLOLEDB provider was installed and already working, I decided to have all the reports use that provider exclusively instead.
You may need to check the installed providers to verify. A direct way is to check the registry; for example, the SQL Native Client provider SQLNCLI10 can be found at:
HKLM\SOFTWARE\Microsoft\SQLNCLI10
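From PowerShell, for instance (a quick sketch; the enumerator lists every OLE DB provider registered on the machine, so SQLOLEDB should show up there if it's installed):
# Check for the SQL Native Client 10 provider key:
Test-Path "HKLM:\SOFTWARE\Microsoft\SQLNCLI10"
# Or enumerate all registered OLE DB providers:
(New-Object System.Data.OleDb.OleDbEnumerator).GetElements() |
    Select-Object SOURCES_NAME, SOURCES_DESCRIPTION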

OPC UA unique ID

I'm trying to build an OPC UA client application.
I'd like to be able to identify a UA node uniquely in the OPC tree.
I know that in OPC DA, a standard node ID is a string with a '.' as a delimiter that I can use to identify a node.
In OPC UA, the node ID doesn't have to be a string, but I'd still like to be able to build a unique string that maps to a particular node.
I'm thinking about basing it on the node names, e.g.: Demo.MyNode.MyValue,
but I'm afraid that a node name can contain characters such as '.', and this would make my IDs non-unique.
Is there a character I can use as a delimiter?
Is there a better way to represent the node ID as a string (including its path)?
OPC-UA offers the concept of a unique "BrowsePath" for each and every node, and a client could opt to store BrowsePaths instead of NodeIds, and then upon startup call the TranslateBrowsePathsToNodeIds service.
In fact, I believe this may be the intended behavior, as there's no requirement that a server use the same NodeId for any given Node after restarting, even if in practice that's how it's done.
I was wrong about the NodeId being allowed to change. The spec says: "A Server shall persist the NodeId of a Node, that is, it shall not generate new NodeIds when rebooting."
I now believe it's best to store NodeIds and only use BrowsePaths to aid in programming against type definitions.
One of the features of OPC UA is that the server can offer different menu trees to different users. It may not matter for your client, since any given user will only see the one tree, and the BrowsePath will be unique for that user.
In v1.03 of Part 3 of the OPC UA spec, "OPC UA Part 3 - Address Space Model 1.03 Specification.pdf", section 5.2.2 says a server should not change a node's NodeId when it reboots. (The spec is available from the OPC Foundation at https://opcfoundation.org. You can register and download it for free.)
Of course, some UA servers might not maintain their NodeIds across a reboot, which is another reason to use Kevin's suggestion of storing BrowsePaths to make a unique string for each node. The string can also make it clearer to the user which node they're accessing. Good idea!
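Regarding the delimiter question: in the textual RelativePath form defined in Part 4 (the input to TranslateBrowsePathsToNodeIds), each element is prefixed with its namespace index, and the reserved characters '/', '.', '<', '>', ':', '#', '!' and '&' inside a browse name are escaped with '&', so a '.' inside a name stays unambiguous. A rough PowerShell sketch of building such a string (the element names here are made up):
function ConvertTo-EscapedBrowseName([string]$name) {
    # Prefix each reserved RelativePath character with the '&' escape character.
    [regex]::Replace($name, '[/.<>:#!&]', { param($m) '&' + $m.Value })
}
# Hypothetical path elements: namespace index plus browse name.
$elements = @(
    @{ ns = 2; name = 'Demo' },
    @{ ns = 2; name = 'My.Node' },   # note the '.' inside the name
    @{ ns = 2; name = 'MyValue' }
)
$path = ($elements | ForEach-Object { "/{0}:{1}" -f $_.ns, (ConvertTo-EscapedBrowseName $_.name) }) -join ''
$path   # -> /2:Demo/2:My&.Node/2:MyValue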
The OPC Foundation announced their "OPC UA Open Shared Source" strategy (04/14/2015).
The stack for .NET, including lots of samples for data access, historical access, and other clients and servers, can be downloaded for free from OPCFoundation/UA-.NET on GitHub.
See also: Build OPC UA .NET applications using C# and VB.NET.
You can take a look at the samples in the "SampleApplications" directory and see how they do things...

How to programmatically change tier of Azure SQL Database

We have a large SQL Database running on Azure which is only generally in use during normal office hours, although from time to time, overtime/weekend staff will require performant access to the database.
Currently, we run the database on the S3 Tier during office hours, and reduce it to S0 at all other times.
I know that there are a number of example PowerShell scripts that can be used together with automation tasks to automatically modify the database tiers according to a predefined timetable. However, we would like to control it from within our own .Net application. The main benefit is that this would allow us to give control to admin staff to switch up the database tier during out-of-hours as required without the need for technical staff to get involved.
There are a number of articles/videos on the Microsoft site that mention "scaling up/down" (as opposed to "scaling out/in", i.e. creating/removing additional shards), but the sample code provided seems to deal exclusively with sharding, and not with vertical "scaling up/down".
Is this possible? Can anyone point me in the direction of any relevant resources?
You can, yes. You have to use the REST API to call our endpoints and update the database.
The description and the parameters of the PUT required for the update are explained here -> Update Database Details
You can change the tier programmatically from there.
Yes, you can change database tiers using the REST API, calling the Azure endpoints to update the tier.
The parameters to be used for the PUT are explained on this msdn page: Update Database
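To make that concrete, the call is just a PUT against the ARM databases endpoint, so it can be issued from any .NET app or script. A hedged PowerShell sketch ($subId, $rg, $server, $db and $token are placeholders you'd fill from your own subscription and Azure AD token flow; double-check the current api-version):
$uri = "https://management.azure.com/subscriptions/$subId/resourceGroups/$rg" +
       "/providers/Microsoft.Sql/servers/$server/databases/$db" +
       "?api-version=2021-11-01"
# Request the Standard tier at the S0 objective; location must match the server's region
# (assumed to be westeurope here).
$body = @{ location = "westeurope"; sku = @{ name = "S0"; tier = "Standard" } } | ConvertTo-Json
Invoke-RestMethod -Method Put -Uri $uri -Headers @{ Authorization = "Bearer $token" } -Body $body -ContentType "application/json"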
This can now be done using multiple methods besides using the REST API directly.
https://learn.microsoft.com/en-us/azure/azure-sql/database/single-database-scale
See Azure CLI example
az sql db update -g mygroup -s myserver -n mydb --edition Standard --capacity 10 --max-size 250GB
See PowerShell example:
Set-AzSqlDatabase -ResourceGroupName "ResourceGroup01" -DatabaseName "Database01" -ServerName "Server01" -Edition "Standard" -RequestedServiceObjectiveName "S0"
I just tried doing this via SQL and it worked, setting the DTUs to 10 (the S0 default). The change is applied asynchronously, so this might take a loooong time:
ALTER DATABASE [mydb_name] MODIFY(EDITION='Standard' , SERVICE_OBJECTIVE='S0')
Reference: https://www.c-sharpcorner.com/blogs/change-the-azure-sql-tier-using-sql-query

Tridion OutboundEmail - Contact synchronization from several Presentation servers?

I'm facing a problem with OutboundEmail Synchronization for Contacts.
We have the following scenario : 2 load-balanced CMS servers and 3 load-balanced CDE web servers located in different data centers.
Each CDE web server will have its own SQL server for the broker DB and the OutboundEmail Subscription + Tracking DB.
If I install a local OutboundEmail Subscription + Tracking DB on each CDE, how can I process Contact Synchronization from the 3 CDE servers, knowing that for a specific Tridion publication you can only specify 1 synchronization target containing 1 URL to profilesync.aspx?
The same applies to Tracking Synchronization.
I must be missing something...
Any suggestion please?
This scenario is currently not supported. We do support multiple presentation servers, but as you mentioned, you can only specify one synchronization target under a publication.
Without going into detail, there were compelling reasons not to support this scenario at that point in time, but it is on our backlog.
I can think of a couple of options to solve it in this version:
• use one database; but I'm guessing the reason to split it up over 3 data centers is fail-over/redundancy and/or geographical reasons
• set up synchronization/tracking on one server and replicate the data to the other 2 databases; note that the replication needs to be bi-directional