Roblox's DataStoreService won't SetAsync properly, but it always loads perfectly. I don't understand why this is happening; it won't save in-game or in Studio. The only way for me to save data reliably is to use the Studio plugin "DataStore Editor".
local DataStoreService = game:GetService("DataStoreService")
local playerData = DataStoreService:GetDataStore("PlayerData")

local function onPlayerJoin(player)
    local PlayerStats = script.PlayerStats:Clone()
    PlayerStats.Parent = player
    local playerUserId = "Player_"..player.UserId
    local wins = playerData:GetAsync(playerUserId.."-Wins")
    local jump = playerData:GetAsync(playerUserId.."-Jump")
    -- tostring() guards the print against nil values for first-time players
    print("Player "..player.Name.." has "..tostring(wins).." wins and "..tostring(jump).." jump power")
    if wins then
        PlayerStats.Wins.Value = wins
        print("Loaded wins for "..playerUserId.." ("..player.Name..")")
    else
        print("Failed to load wins for "..playerUserId.." ("..player.Name..")")
    end
    if jump then
        PlayerStats.Jump.Value = jump
        print("Loaded jump for "..playerUserId.." ("..player.Name..")")
    else
        print("Failed to load jump for "..playerUserId.." ("..player.Name..")")
    end
    -- auto-save loop: runs every 10 seconds for as long as this player is in the game
    while wait(10) do
        local success, err = pcall(function()
            local playerUserId = "Player_"..player.UserId
            playerData:SetAsync(playerUserId.."-Wins", player.PlayerStats.Wins.Value)
            playerData:SetAsync(playerUserId.."-Jump", player.PlayerStats.Jump.Value)
            print("Saved data for "..playerUserId.." ("..player.Name..")")
        end)
        if not success then
            warn("Failed to save data for "..player.UserId.." ("..player.Name.."): "..tostring(err))
        end
    end
end

local function onPlayerExit(player)
    local success, err = pcall(function()
        local playerUserId = "Player_"..player.UserId
        playerData:SetAsync(playerUserId.."-Wins", player.PlayerStats.Wins.Value)
        playerData:SetAsync(playerUserId.."-Jump", player.PlayerStats.Jump.Value)
        print("Saved data for "..playerUserId.." ("..player.Name..")")
    end)
    if not success then
        warn("Failed to save data for "..player.UserId.." ("..player.Name.."): "..tostring(err))
    end
end

game.Players.PlayerAdded:Connect(onPlayerJoin)
game.Players.PlayerRemoving:Connect(onPlayerExit)
My leaderstats-creation code may look different from usual because I just clone the folder, with everything already inside it, from the script.
I expect it to save and load data properly, but it only loads, and I don't understand why.
Have you enabled this option?
https://i.stack.imgur.com/I2fg4.png
Using data stores can be buggy if you're running the game in Studio.
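If that option is already enabled and saves still fail, it can help to surface the exact error SetAsync returns. A minimal sketch, reusing the "PlayerData" store from the question (the key and value here are placeholders):

local DataStoreService = game:GetService("DataStoreService")
local playerData = DataStoreService:GetDataStore("PlayerData")

-- Attempt a single write and print the exact failure reason, if any.
local ok, err = pcall(function()
    playerData:SetAsync("Player_0-Wins", 123)
end)

if ok then
    print("SetAsync succeeded")
else
    -- In Studio, a failure here often means "Enable Studio Access to API Services"
    -- is turned off under Game Settings > Security.
    warn("SetAsync failed: "..tostring(err))
end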
I am using PostgreSQL and Go in my project.
I have created a postgresql background worker using golang. This background worker listens to LISTEN/NOTIFY channel in postgres and writes the data it receives to a file.
The trigger(which supplies data to channel), background worker registration and the background worker's main function are all in one .so file. The core background process logic is in another helper .so file. The background process main loads the helper .so file using dlopen and executes the core logic from it.
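For context, the notification side of LISTEN/NOTIFY boils down to a pg_notify() call; in my project this happens inside the trigger, but a hypothetical client-side sketch looks like this (the connection string and payload are placeholders; the channel name matches the listener code below):

package main

import (
    "database/sql"
    "log"

    _ "github.com/lib/pq"
)

func main() {
    // Placeholder connection string; the real project sends notifications from a trigger.
    db, err := sql.Open("postgres", "postgres://user:pass@localhost/mydb?sslmode=disable")
    if err != nil {
        log.Fatal(err)
    }
    defer db.Close()

    // pg_notify(channel, payload) is what the trigger would typically call.
    if _, err := db.Exec(`SELECT pg_notify('mystream', '{"id": 1}')`); err != nil {
        log.Fatal(err)
    }
}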
Issue faced:
It works fine on Windows, but on Linux, where I tried it with Postgres 13.6, there is a problem.
It runs as intended for a while, but after about 2-3 hours the PostgreSQL postmaster's CPU utilization shoots up and it seems to get stuck. The spike is very sudden (it happens in less than 5 minutes; until it starts shooting up, everything is normal).
I am not able to establish a connection to Postgres from the psql client.
In the log the following message keeps repeating:
WARNING: worker took too long to start; canceled.
I tried commenting out various areas of my code, adding sleeps at different places in the core processing loop, and even disabling the trigger, but the issue still occurs.
Software and libraries used:
go version go1.17.5 linux/amd64
Postgres 13.6, on Ubuntu
lib/pq: https://github.com/lib/pq
My core processing loop looks like this:
maxReconn := time.Minute
listener := pq.NewListener(<connectionstring>, minReconn, maxReconn, EventCallBackFn) // libpq used here
defer listener.UnlistenAll()

if err = listener.Listen("mystream"); err != nil {
    panic(err)
}

var itemsProcessedSinceLastSleep int = 0
for {
    select {
    case signal := <-signalChan:
        PgLog(PG_LOG, "Exiting loop due to termination signal : %d", signal)
        return 1
    case pgstatus := <-pmStatusChan:
        PgLog(PG_LOG, "Exiting loop as postmaster is not running : %d ", pgstatus)
        return 1
    case data := <-listener.Notify:
        // Throttle briefly after every 1000 notifications
        itemsProcessedSinceLastSleep = itemsProcessedSinceLastSleep + 1
        if itemsProcessedSinceLastSleep >= 1000 {
            time.Sleep(time.Millisecond * 10)
            itemsProcessedSinceLastSleep = 0
        }
        ProcessChangeReceivedAtStream(data) // This performs the data processing
    case <-time.After(10 * time.Second):
        // Periodic connection health check, at most once every 30 minutes
        time.Sleep(100 * time.Millisecond)
        var cEpoch = time.Now().Unix()
        if cEpoch-lastConnChkTime > 1800 {
            lastConnChkTime = cEpoch
            if err := listener.Ping(); err != nil {
                PgLog(PG_LOG, "Seems to be a problem with connection")
            }
        }
    default:
        time.Sleep(time.Millisecond * 100)
    }
}
For testing purposes, I created a specific node on my Firebase database. I copy a user over to that node and then can futz with it without worrying about corrupting data or ruining a user's info. It works really well for my purposes.
I've run into a problem, however. If a user has an extremely large set of data, the copy function won't work. It just stalls. I don't get any errors, though. I read that Firebase has copy limits of 1MB, and I'm guessing that's the problem. I'm running up against that wall, I think.
Here is my code:
func copyToTestingNode() {
    let start = Date()

    // 1. create copy of user and then modify the copy
    guard var copiedUser = user else { print("copied user error"); return }
    copiedUser.userID = MP.adminID
    copiedUser.householdInfo.subscriptionExpiryDate = 2500000000

    // 2. get a snapshot of the copied user's info
    ref.child(user.userID).observeSingleEvent(of: .value) { (userSnapshot) in
        print("Step 2 TRT:", Date().timeIntervalSince(start))

        // 3. remove any existing data at admin node, and then...
        self.ref.child(MP.adminID).removeValue { (error, dbRef) in
            print("Step 3 TRT:", Date().timeIntervalSince(start))

            // 4. ...copy the new user info to the admin node
            self.ref.child(MP.adminID).setValue(userSnapshot.value, withCompletionBlock: { (error, adminRef) in
                print("Step 4 TRT:", Date().timeIntervalSince(start))

                // 5. then send user alert and stop activity indicator
                self.activityIndicator.stopAnimating()
                self.showSimpleAlert(alertTitle: "Copy Complete", alertMessage: "Your copy of \(copiedUser.householdInfo.userName) is complete and can be found under the new node:\n\n\(copiedUser.householdInfo.userName) Family")
            })
        }
    }
}
Options:
Is there a simple way to check the size of the DataSnapshot to alert me that the dataset is too large to copy over?
Is there a simple way to split up the snapshot into smaller pieces and overcome the 1MB limit that way?
Should I use Cloud Functions instead of trying to trigger this on a device?
Is there a way to somehow "compress" the snapshot to be smaller so that I can copy it easier?
I'm open to suggestions.
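As a rough way to check option 1, I could serialize the snapshot's value and measure its byte count; this is only an approximation of the payload size, not an official Firebase measurement (a sketch, assuming the value is JSON-serializable):

import Foundation
import FirebaseDatabase

// Rough byte-size estimate of a snapshot's value; returns nil if it can't be serialized.
func approximateSize(of snapshot: DataSnapshot) -> Int? {
    guard let value = snapshot.value,
          JSONSerialization.isValidJSONObject(value),
          let data = try? JSONSerialization.data(withJSONObject: value) else {
        return nil
    }
    return data.count
}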
UPDATE #1
I read about the size limitation HERE. Judging from Frank's reaction, I'm guessing my understanding of that limitation is wrong.
I downloaded the node from the Firebase console and checked its size. It's 799 KB on my hard drive. It's a large JSON tree, and so I thought that its size must be the reason why it won't copy over. The smaller nodes copy over no problem. Just the large ones have trouble.
UPDATE #2
I'm not sure how to show the actual data, other than a screenshot, seeing how large the JSON tree is. So here is a screenshot:
As you can see, the data has multiple nodes, some of which are larger than others. I suppose I can cut down the 'Job Jar' node, but the rest really need to be that size for everything to work properly.
Granted, this is one of the largest datasets I have among all my users, but the structure doesn't change.
As for the speed of execution for each line of code, here are the simulator times for each numbered step:
Step 2 TRT: 0.5278879404067993
Step 3 TRT: 0.6249579191207886
Step 4 TRT: 1.8466829061508179
ALL DONE COPYING!!
This only works for the smaller datasets. For the larger ones, I never get to step 4. It just hangs. I let it run for several minutes, but no change.
Final version that seems to work:
func copyToTestingNode() {
    // 1. create copy of user and then modify the copy
    guard var copiedUser = user else { print("copied user error"); return }
    let adminRef = ref.child(MP.adminID)
    copiedUser.userID = MP.adminID
    copiedUser.householdInfo.subscriptionExpiryDate = 2500000000

    // 2. get a snapshot of the copied user's info
    ref.child(user.userID).observeSingleEvent(of: .value) { (userSnapshot) in

        // 3. remove any existing data at admin node, and then...
        adminRef.removeValue { (error, dbRef) in
            if (error != nil) { print("Yikes!") }

            // 4. ...copy the new user info to the admin node one child node at a time (if user has a lot of data)
            var totalNodesCopied = 0
            for item in userSnapshot.children {
                guard let snap = item as? DataSnapshot else { print("snap error"); return }
                self.ref.child(MP.adminID).child(snap.key).setValue(snap.value) { (error, adminRef) in
                    totalNodesCopied += 1
                    if totalNodesCopied == userSnapshot.childrenCount {
                        print("ALL DONE COPYING!!")
                    }
                }
            }
        }
    }
}
I have a Java agent that loops through a view and gets the attachment from each document. The attachment is nothing but a .dxl file containing the document's XML data. I am extracting the file to a temp directory and trying to import the extracted .dxl as soon as it gets extracted.
The problem is that it only imports (or works on) the first document's attachment in the loop, and then it throws the following error in the Java debug console:
NotesException: DXL importer operation failed
at lotus.domino.local.DxlImporter.importDxl(Unknown Source)
at JavaAgent.NotesMain(Unknown Source)
at lotus.domino.AgentBase.runNotes(Unknown Source)
at lotus.domino.NotesThread.run(Unknown Source)
My Java agent code is:
public class JavaAgent extends AgentBase {

    static DxlImporter importer = null;

    public void NotesMain() {
        try {
            Session session = getSession();
            AgentContext agentContext = session.getAgentContext();
            // (Your code goes here)
            // Get current database
            Database db = agentContext.getCurrentDatabase();
            View v = db.getView("DXLProcessing_mails");
            DocumentCollection dxl_tranfered_mail = v.getAllDocumentsByKey("dxl_tranfered_mail");
            Document dxlDoc = dxl_tranfered_mail.getFirstDocument();

            while (dxlDoc != null) {
                RichTextItem rt = (RichTextItem) dxlDoc.getFirstItem("body");
                Vector allObjects = rt.getEmbeddedObjects();
                System.out.println("File name is " + allObjects.get(0));
                EmbeddedObject eo = dxlDoc.getAttachment(allObjects.get(0).toString());
                if (eo.getFileSize() > 0) {
                    eo.extractFile(System.getProperty("java.io.tmpdir") + eo.getName());
                    System.out.println("Extracted File to " + System.getProperty("java.io.tmpdir") + eo.getName());
                    String filePath = System.getProperty("java.io.tmpdir") + eo.getName();
                    Stream stream = session.createStream();
                    if (stream.open(filePath) & (stream.getBytes() > 0)) {
                        System.out.println("In If " + System.getProperty("java.io.tmpdir"));
                        importer = session.createDxlImporter();
                        importer.setDocumentImportOption(DxlImporter.DXLIMPORTOPTION_CREATE);
                        System.out.println("Break Point");
                        importer.importDxl(stream, db);
                        System.out.println("Imported Successfully");
                    } else {
                        System.out.println("In else " + stream.getBytes());
                    }
                }
                dxlDoc = dxl_tranfered_mail.getNextDocument();
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
The code executes until it prints "Break Point" and then throws the error, but the attachment does get imported the first time.
On the other hand, if I hard-code the filePath to a specific .dxl file on the file system, it imports the DXL as a document into the database with no errors.
I am wondering whether the issue is that the stream being passed in isn't fully processed before the next loop iteration executes.
Any kind of suggestion would be helpful.
I can't see any part where your while loop would move on from the first document.
Usually you would have something like:
Document nextDoc = dxl_tranfered_mail.getNextDocument(dxlDoc);
dxlDoc.recycle();
dxlDoc = nextDoc;
Near the end of the loop, to advance it to the next document. As your code currently stands, it looks like it would never advance and would always be on the first document.
If you do not know about the need to 'recycle' Domino objects, I suggest you search for some blog posts or articles that explain why it is necessary.
It is a little complicated, but basically the Java objects are just a 'wrapper' for the objects in the C API.
Whenever you create a Domino object (such as a Document, View, DocumentCollection, etc.), a memory handle is allocated in the underlying 'C' layer. This needs to be released (recycled). It will eventually be released when the session is recycled, but when you are processing in a loop it is much more important to recycle explicitly, because you can easily exhaust the available memory handles and cause a crash.
Also, it's possible you may need to close (and recycle) each Stream after you are finished importing each file.
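Putting those two points together, the end of each loop iteration could look roughly like this (a sketch only; the variable names follow the question's code, and the loop still sits inside the agent's outer try/catch):

while (dxlDoc != null) {
    // ... extract the attachment and import it, as in the question ...

    Document nextDoc = dxl_tranfered_mail.getNextDocument(dxlDoc);

    // Release the C-layer handles held by this iteration's objects
    // (stream, eo and rt are the per-iteration objects from the question's loop).
    stream.close();
    stream.recycle();
    eo.recycle();
    rt.recycle();
    dxlDoc.recycle();
    dxlDoc = nextDoc;
}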
Lastly, double-check that the extracted file causing the exception is definitely a valid DXL file; it could simply be that some of the attachments are not valid DXL and will always throw an exception.
You could put a try/catch within the loop to handle that scenario (and report the problem files), which will allow the agent to continue without halting.
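For example, a per-document guard along these lines (a hypothetical sketch using the question's variable names, still inside the agent's outer try/catch) would let one bad attachment be reported without stopping the rest of the view:

while (dxlDoc != null) {
    try {
        // ... extract and import this document's attachment ...
    } catch (NotesException ex) {
        // Report the problem document and keep going with the rest of the collection.
        System.out.println("Import failed for " + dxlDoc.getUniversalID() + ": " + ex.text);
    }
    Document nextDoc = dxl_tranfered_mail.getNextDocument(dxlDoc);
    dxlDoc.recycle();
    dxlDoc = nextDoc;
}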
I have the following code
conn = net.createConnection(net.TCP, 0)

conn:on("sent", function(sck, c)
    print("Sent")
    sck:close()
end)

conn:on("connection", function(sck, c)
    print("Connected..")
    sck:send("test")
end)

conn:connect(9090, "192.168.1.89")
print("Send data.")
This works fine when run as a snippet in ESPlorer, i.e. in the live interpreter. I see the output "Connected.." and "Sent", and the message appears on the server. When it is part of either init.lua or my mcu-temp.lua, I don't even see the "Connected.." message.
The connection to Wi-Fi is OK, and the board isn't reset between trying it "live" and from the file. I'm really stuck as to why it works OK one way and not the other.
The connection to Wi-Fi is OK
I seriously doubt that. If you run it from ESPlorer then yes, but not when you reboot the device.
Connecting to an AP normally takes a few seconds. You need to wait until it's connected before you can continue with the startup sequence. Remember: with NodeMCU most operations are asynchronous and event-driven; wifi.sta.connect() does NOT block.
Here's a startup sequence I borrowed, and adapted, from https://cknodemcu.wordpress.com/.
SSID = <tbd>
PASSWORD = <tbd>

function startup()
    local conn = net.createConnection(net.TCP, 0)

    conn:on("sent", function(sck, c)
        print("Sent")
        sck:close()
    end)

    conn:on("connection", function(sck, c)
        print("Connected..")
        sck:send("test")
    end)

    conn:connect(9090, "192.168.1.89")
    print("Sent data.")
end

print("setting up WiFi")
wifi.setmode(wifi.STATION)
wifi.sta.config(SSID, PASSWORD)
wifi.sta.connect()

tmr.alarm(1, 1000, 1, function()
    if wifi.sta.getip() == nil then
        print("IP unavailable, waiting...")
    else
        tmr.stop(1)
        print("Config done, IP is "..wifi.sta.getip())
        print("You have 5 seconds to abort startup")
        print("Waiting...")
        tmr.alarm(0, 5000, 0, startup)
    end
end)
Just two days ago I answered nearly the same question here on SO. See https://stackoverflow.com/a/37495955/131929 for an alternative solution.
Here's the code:
ALint cProcessedBuffers = 0;
ALenum alError = AL_NO_ERROR;

alGetSourcei(m_OpenALSourceId, AL_BUFFERS_PROCESSED, &cProcessedBuffers);
if ((alError = alGetError()) != AL_NO_ERROR)
{
    throw "AudioClip::ProcessPlayedBuffers - error returned from alGetSourcei()";
}

alError = AL_NO_ERROR;
if (cProcessedBuffers > 0)
{
    alSourceUnqueueBuffers(m_OpenALSourceId, cProcessedBuffers, arrBuffers);
    if ((alError = alGetError()) != AL_NO_ERROR)
    {
        throw "AudioClip::ProcessPlayedBuffers - error returned from alSourceUnqueueBuffers()";
    }
}
The call to alGetSourcei returns with cProcessedBuffers > 0, but the following call to alSourceUnqueueBuffers fails with an AL_INVALID_OPERATION error. This is an erratic error that does not always occur. The program containing this sample code is a single-threaded app running in a tight loop (typically it would be synced with a display loop, but in this case I'm not using a timed callback of any sort).
Try alSourceStop(m_OpenALSourceId) first.
Then alSourceUnqueueBuffers(), and after that restart playback with alSourcePlay(m_OpenALSourceId).
I solved the same problem this way, but I don't know why it has to be done like that.
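In terms of the question's own identifiers, that workaround amounts to something like the following sketch:

/* Stop, unqueue the processed buffers, then resume playback. */
alSourceStop(m_OpenALSourceId);
alSourceUnqueueBuffers(m_OpenALSourceId, cProcessedBuffers, arrBuffers);
alSourcePlay(m_OpenALSourceId);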
As mentioned in this SO thread:
If you have AL_LOOPING enabled on a streaming source, the unqueue operation will fail.
The looping flag holds some sort of lock on the buffers while it is enabled. The answer by @MyMiracle hints at this as well: stopping the sound releases that hold, but it isn't necessary.
AL_LOOPING is not meant to be set on a streaming source, since you manage the source data in the queue yourself. Keep queuing and it will keep playing; queue from the beginning of the data and it will loop.
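So for a streaming source the fix is simply to leave looping off and keep recycling buffers. A minimal sketch, where "source" and "buffers" are placeholders for the app's own handles:

ALint processed = 0;

/* Streaming sources should not use AL_LOOPING; looping is achieved by re-queuing data. */
alSourcei(source, AL_LOOPING, AL_FALSE);

alGetSourcei(source, AL_BUFFERS_PROCESSED, &processed);
if (processed > 0)
{
    /* Reclaim the buffers that have finished playing. */
    alSourceUnqueueBuffers(source, processed, buffers);

    /* Refill the reclaimed buffers with fresh audio data (e.g. via alBufferData)
       before re-queuing them; queuing from the start of the data produces a loop.
       The refill step is omitted in this sketch. */
    alSourceQueueBuffers(source, processed, buffers);
}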