I need to change a memory buffer variable in pg_filedump. The variable is hard-coded, so I have to compile the tool from source. I have the source downloaded from GitHub to /home/kali/pg_filedump. I made a modification in decode.c at line 244, changing
static char decompress_tmp_buff[64 * 1024];
to
static char decompress_tmp_buff[128 * 1024];
When I try to compile, I get an error, because I need to include the source for all of PostgreSQL.
As the compilation instructions in the repository state:
To compile pg_filedump, you will need to have a properly configured
PostgreSQL source tree or complete install tree (with include files)
of the appropriate PostgreSQL major version.
This is the first time in my life I am compiling from source, and that instruction makes too many assumptions. What is "a properly configured PostgreSQL source tree"?
I have downloaded the PostgreSQL source code from git to /home/kali/postgresql, but how do I point the compiler in /home/kali/pg_filedump to where the PostgreSQL source is?
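For concreteness, this is roughly what I am trying (the variable name below is only my guess from skimming the Makefile, and may well be the wrong knob):

```shell
cd /home/kali/pg_filedump
# plain `make` fails with missing PostgreSQL headers, so presumably
# the build needs to be pointed at the PostgreSQL source tree,
# e.g. something like:
make top_builddir=/home/kali/postgresql
```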
Prologue: I am using STM32CubeIDE to develop embedded applications in C for STM32 microcontrollers, like the F1 series, the F4 series, the G0 series and some others.
What happened:
This morning the automatic update feature suggested I update to STM32CubeIDE version 1.9.0, and I accepted. After the updater had finished, I opened my current project, changed one variable in a typedef struct and hit the "build" button. All of a sudden the linker reported lots of "multiple definition" and "first defined here" errors. This project was compiling perfectly, without any issues, yesterday with CubeIDE version 1.8.0.
After searching for an hour or two for a missed semicolon or something in that direction that could mess up the whole code, I came to the conclusion that the upgrade from CubeIDE 1.8.0 to 1.9.0 might be the root cause of these errors.
So I decided to uninstall CubeIDE 1.9.0 and reinstall version 1.8.0, rolled the project back to the last working version from yesterday evening (compiled with 1.8.0), made the same changes, and voilà: everything worked again.
To me it looks like ST messed something up with the linker. Can anyone confirm this behavior, or was I the only one affected?
This is due to a compiler update. From the release notes of STM32CubeIDE:
GCC 10 support by default
From GCC 10 release notes:
GCC now defaults to -fno-common. As a result, global variable accesses
are more efficient on various targets. In C, global variables with
multiple tentative definitions now result in linker errors. With
-fcommon such definitions are silently merged during linking.
This page has further explanation and a workaround:
A common mistake in C is omitting extern when declaring a global
variable in a header file. If the header is included by several files
it results in multiple definitions of the same variable. In previous
GCC versions this error is ignored. GCC 10 defaults to -fno-common,
which means a linker error will now be reported. To fix this, use
extern in header files when declaring global variables, and ensure
each global is defined in exactly one C file. If tentative definitions
of particular variables need to be placed in a common block,
__attribute__((__common__)) can be used to force that behavior even in code compiled without -fcommon. As a workaround, legacy C code where
all tentative definitions should be placed into a common block can be
compiled with -fcommon.
In Project > Properties > C/C++ Build > Settings > MCU GCC Compiler > Miscellaneous > Other flags, try adding -fcommon to avoid the flood of 1k+ linker errors with STM32CubeIDE 1.9.
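A minimal sketch of the underlying problem and the extern fix described above (file and variable names invented for illustration):

```c
/* Before GCC 10: a header containing `int g_error_count;` (no extern)
 * included from several .c files produced multiple tentative
 * definitions, which the linker silently merged under -fcommon.
 * With -fno-common (the GCC 10 default) the link instead fails with
 * "multiple definition of `g_error_count'" / "first defined here".
 *
 * The fix: declare the variable with extern in the header...
 *     extern int g_error_count;
 * ...and define it in exactly one .c file, as below. */

int g_error_count = 0;   /* the single definition */

void count_error(void)
{
    g_error_count++;
}
```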
I've recently switched to using PlatformIO for developing for STM32 using the following workflow:
Create a .ioc file using the CubeMX utility
Generate source code and the PlatformIO configuration from that .ioc file using the stm32pio command-line utility
Edit, build, and debug using the PlatformIO plug-in for VSCode (Mac)
Overall, this works very well. However, I was previously using the CubeMX code generation inside ST's CubeIDE, which placed a .s file in the source directory that (as I understand it) defined the vector table, as well as the default handler used for exceptions/interrupts that are not explicitly defined (i.e., those handled by their default weak implementations). I don't see where this is defined in the new workflow. Is it generated dynamically as part of the build process?
The reason I'm asking is (beside wanting a better understanding of the process overall), I'd like to write an interrupt handler for EXTI0, but trigger it as a software interrupt, and not assign a pin to it. If that is not possible, then perhaps the entire point is moot.
I was able to find the answer. These steps might be useful to somebody else who comes across this question. This was done on macOS, but the process should be similar for other operating systems.
During the build process, the filename can be seen. It will be prefaced with startup_, followed by the name of the particular chip you're developing for. In my case, the line is
Compiling .pio/build/disco_f072rb/FrameworkCMSISDevice/gcc/startup_stm32f072xb.o
Searching in the .platformio folder of my user directory, I found the matching .s file, which in my case was .platformio/packages/framework-stm32cube/f0/Drivers/CMSIS/Device/ST/STM32F0xx/Source/Templates/gcc/startup_stm32f072xb.s
The structure of the path leading to the file indicates the particulars of the hardware and frameworks I'm using: STM32Cube framework, a F0 series chip, and the GCC compiler. The easiest way to find this file, and how I was able to find it, is using the find command from the terminal to search the PlatformIO directory.
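The find invocation looked roughly like this (the exact package path depends on your PlatformIO installation):

```shell
# search the PlatformIO package cache for the startup assembly file
find ~/.platformio/packages -name 'startup_stm32*.s'
```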
Reading this file gives the lines I was looking for, defining the names of the functions to be used for exception and interrupt handling, such as the following:
.weak EXTI0_1_IRQHandler
.thumb_set EXTI0_1_IRQHandler,Default_Handler
It seems like, while I am using the CubeMX HAL for some drivers, the basic startup code is done using CMSIS, so it should be the same for HAL, LL, or CMSIS based builds.
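On the software-interrupt part of the question: the EXTI peripheral has a software interrupt event register (SWIER), so a line can be fired without assigning any pin. A sketch for an F0 part (register and bit names follow the STM32F0 CMSIS device header; they vary slightly between header versions, so treat these names as assumptions to verify against your device headers and reference manual):

```c
#include "stm32f0xx.h"   /* vendor CMSIS device header */

/* Override the weak default handler named in the startup .s file. */
void EXTI0_1_IRQHandler(void)
{
    if (EXTI->PR & EXTI_PR_PR0) {
        EXTI->PR = EXTI_PR_PR0;   /* clear pending flag (write 1) */
        /* ... handle the software-triggered interrupt ... */
    }
}

void trigger_exti0_in_software(void)
{
    EXTI->IMR  |= EXTI_IMR_MR0;        /* unmask EXTI line 0 */
    NVIC_EnableIRQ(EXTI0_1_IRQn);      /* enable the IRQ in the NVIC */
    EXTI->SWIER |= EXTI_SWIER_SWIER0;  /* fire line 0 from software */
}
```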
Bringing a client's old WinForms application into our development ecosystem. Part of the effort is putting their database under source control with a database project.
When I build locally (VS 2017), I get a bunch of warnings that some of the stored procs / functions reference objects that do not exist.
For Example:
Warning: SQL71502: Procedure: [dbo].[proc_WORKORDERDETAILUpdate] has an unresolved reference to object [dbo].[WORKORDERDETAIL].[WORKERORDERNUMBER].
In this particular scenario - the entire table is missing from the database.
The stored procs in the db are out of date. Obviously they are not being used. I am not trying to fix this whole mess now; I just want to get the app and db under source control.
When I check this in, the build definition fails on the database project. The warnings I get locally are treated as hard errors on the build server.
Error SQL71502: Procedure: [dbo].[proc_WORKORDERDETAILUpdate] has an unresolved reference to object [dbo].[WORKORDERDETAIL].[WORKERORDERNUMBER].
I got it. Rookie mistake.
I had the "Treat warnings as errors" box unchecked for the Debug configuration, hence why I was fine locally.
My Release (build server) configuration was not the same. It was not until I looked at the project's .sqlproj XML file that I saw the difference.
I leave this here just in case anyone else runs into a build server problem that cannot be replicated locally. Maybe it will give them a pointer.
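For reference, the relevant setting lives per configuration in the .sqlproj; a fragment along these lines (values illustrative) is the kind of thing that can differ between the Debug and Release PropertyGroups:

```xml
<PropertyGroup Condition=" '$(Configuration)' == 'Release' ">
  <TreatTSqlWarningsAsErrors>False</TreatTSqlWarningsAsErrors>
  <!-- or suppress only this class of warning: -->
  <SuppressTSqlWarnings>71502</SuppressTSqlWarnings>
</PropertyGroup>
```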
I have installed pycparser, which parses C code.
Using pycparser I want to parse an open-source project, namely PostgreSQL (version 11.0). I have built it using the Visual Studio Express 2017 compiler suite. However, during parsing some header files cannot be found, namely windows.h and winsock2.h.
Looking at the directory structure of the built PostgreSQL, I find that it does not contain these header files. How do I fix this issue?
Also a strange error occurred as:
postgresql/src/include/c.h:363:2: error: #error must have a working
64-bit integer datatype
Note: I am using Windows 10 64-bit platform and postgresql-11.0
The steps I followed are:
I downloaded Visual Studio 2017, the Windows 10 SDK, and ActivePerl, as described in the PostgreSQL instructions for building from source.
After this I opened the Visual Studio developer command prompt and navigated to the folder postgresql-11.0/src/tools/msvc.
I used the command build to build PostgreSQL. The build process was successful, but windows.h and winsock2.h were still not found in the PostgreSQL directory structure.
I don't know pycparser, but your problem probably has two aspects to it:
You didn't give pycparser the correct list of include directories. The header files you mention are not part of PostgreSQL.
Maybe you can get the list from the environment of the Visual Studio prompt. I don't have a Windows here to verify that.
The error message means that neither HAVE_LONG_INT_64 nor HAVE_LONG_LONG_INT_64 is defined.
Now pg_config.h.win32, which is copied to pg_config.h during the MSVC install process, has the following:
#if (_MSC_VER > 1200)
#define HAVE_LONG_LONG_INT_64 1
#endif
Since you are not using MSVC, you probably don't have _MSC_VER set, which causes the error.
You could define _MSC_VER and see if you get to build then.
Essentially you are in a tight spot here, because pycparser is not a supported build procedure, so you'll have to dig into the source and fix things as you go. Without an understanding of the PostgreSQL source and the build process, you probably won't get far.
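If you do want to push ahead anyway, the knobs end up in pycparser's parse_file call. A sketch (all paths are placeholders for your own layout; pycparser also ships "fake" libc headers you would point at the same way):

```python
from pycparser import parse_file

# Sketch: preprocess with the include paths and defines that
# PostgreSQL's headers expect. Paths below are placeholders.
ast = parse_file(
    'some_postgres_file.c',
    use_cpp=True,
    cpp_path='cpp',   # any working C preprocessor
    cpp_args=[
        '-Ipycparser/utils/fake_libc_include',  # pycparser's stub headers
        '-Ipostgresql/src/include',
        '-D_MSC_VER=1900',   # pretend to be MSVC, per the answer above
    ],
)
```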
I have an already-built Linux kernel image (zImage) and I want to set up the source to reproduce it.
However, I am running into trouble: I understand the meaning of the major/minor numbers, but I could not compile the same version:
target version: 4.5.0-00183-g4647b69-dirty
I don't even know the meaning of "00183-g4647b69-dirty" or how to apply it.
Thanks in advance.
NB: I've copied the config.gz from the target kernel but in vain.
Both my own image and the other one are cross-compiled
I think you have CONFIG_LOCALVERSION_AUTO=y. This option derives the kernel name from the output of "git describe".
In your version 4.5.0-00183-g4647b69-dirty:
"4.5.0" means the kernel version tag v4.5.0,
"00183" means you have 183 commits on top of v4.5.0,
"g4647b69" means your HEAD commit SHA-1 is 4647b69 ("g" is just a prefix),
"-dirty" means you have local changes not committed to git.
What you probably need is to disable CONFIG_LOCALVERSION_AUTO and set CONFIG_LOCALVERSION explicitly. You will then be able to recreate your builds.
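Concretely, the .config fragment would look something like this (the LOCALVERSION value is whatever suffix you want baked into the name, possibly empty):

```
# .config fragment: stop deriving the version suffix from `git describe`
# CONFIG_LOCALVERSION_AUTO is not set
CONFIG_LOCALVERSION=""
```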