Salesforce import fields @salesforce/schema/ - import

I am trying to access Account fields under the Opportunity object.
Is there any way to import Account fields under the Opportunity? I tried the following, but it's not working:
import OPP_CITY from '@salesforce/schema/Opportunity.Account.BillingCity';


Insert value into Azure DevOps Custom Fields through Rest Api

I've added some custom fields for Test Plan in Azure DevOps. But when I try to create a new Test Plan through the REST API call, it only creates a Test Plan with the default fields populated, while the custom fields remain blank.
I've tried using both the field name (like Team Name) and the field reference name (like custom.TeamName), but to no avail. Even Get does not expose the custom fields. Is some extra configuration required for custom fields, or is it a code-related issue?
Details: I've created one Inherited process under the organization, and then under Process -> Test Plan I've created new fields in the Test Plan, as shown in the screenshot:
I've tried the code below to create the Test Plan as a Work Item and successfully created it with the extra fields. But as I couldn't create a test suite independently, it is not behaving properly.
I've created a JsonPatchDocument and added all the fields (adding just one here), as in the code below:
JsonPatchDocument patchDocument = new JsonPatchDocument();
patchDocument.Add(
    new JsonPatchOperation()
    {
        Operation = Operation.Add,
        Path = "/fields/System.TeamName",
        Value = "Xander"
    }
);
VssConnection connection = new VssConnection(uri, credential);
WorkItemTrackingHttpClient workItemTrackingHttpClient = connection.GetClient<WorkItemTrackingHttpClient>();
WorkItem res = workItemTrackingHttpClient.CreateWorkItemAsync(patchDocument, project, "Test Plan").Result;
It creates the Test Plan, but not the Test Suite, so it does not behave as expected.
I could reproduce this issue with the REST API Test Plans - Create.
I think this is a limitation of the REST API itself, because even when I use the REST API Test Plans - Get to fetch the test plan, the response body does not include the custom field.
Since we cannot get the custom field, we cannot use POST or PATCH for the custom field either.
You could add a request for this feature on our UserVoice site (https://developercommunity.visualstudio.com/content/idea/post.html?space=21), which is our main forum for product suggestions. Thank you for helping us build a better Azure DevOps.

How to use react admin AutoCompleteInput with remote data?

I'm using react-admin to develop a new panel for my client. I want to use AutocompleteInput, but all the examples I find in the docs and online show simple static data defined in the same component.
I want to use AutocompleteInput for a list of items retrieved from my API.
How can I do that?
You can use the AutocompleteInput inside a ReferenceInput as explained in the documentation: https://marmelab.com/react-admin/Inputs.html#autocompleteinput
import { AutocompleteInput, ReferenceInput } from 'react-admin';
<ReferenceInput label="Post" source="post_id" reference="posts">
    <AutocompleteInput optionText="title" />
</ReferenceInput>
It means you have to declare the referenced resource using a Resource component in your admin.
If you want to fetch data directly from a remote source, then I suggest you use the Autocomplete from material-ui instead. React-Admin is not a UI library.

Import API not working in Sisense

I was trying to use the dashboard import API from v1.0, which can be found in the REST API reference. I logged in at http://localhost:8083/dev/api/docs/#/, provided the correct authorization token, a .dash file in the body, and a 24-character importFolder, and hit the Run button to fire the API. It returns 201 as the HTTP response, which means the request was successful. However, when I go back to the homepage, I don't see any new dashboard imported into that folder. I have tried both cases: where the importFolder already exists (created manually by me), and where it does not exist, in which case I expect the API to create it for me. Neither of these creates/imports the dashboard.
A few comments that should help you resolve this:
When running the command from the interactive API reference (Swagger) you don't need the authentication token, because you're already logged in with an active session.
Make sure the JSON of your dashboard is valid by saving it as a .dash file and importing via the UI.
The folder field is optional - if you leave it blank, the dashboard is imported to the root of your navigation/folders panel.
If you'd like to import to a specific folder, you'll need to provide the folder ID, not its name. It can be found several ways, such as using the /api/v1/folders endpoint, where you can provide a name filtering field and use the oid property of the returned object as the value for the folder field in the import endpoint.
If you still can't get this to work, use Chrome's developer tools to look at the outgoing request when you import from the UI, and compare the request (headers, body and path) to what you're doing via Swagger in order to find the issue.
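To illustrate the folder-ID lookup described above, here is a minimal Python sketch. The /api/v1/folders endpoint and its oid property come from the answer; the exact response shape (a list of folder objects) and the helper name folder_oid_by_name are assumptions for illustration.

```python
# Sketch: pick the folder ID (oid) out of a /api/v1/folders response so it
# can be used as the importFolder value when calling the import endpoint.
# The response shape below is an assumption, not documented Sisense output.
def folder_oid_by_name(folders, name):
    for folder in folders:
        if folder.get("name") == name:
            return folder["oid"]
    return None

# Assumed example payload from GET /api/v1/folders?name=Sales:
folders = [{"oid": "5d9f1c2e8b1e4a0001234567", "name": "Sales"}]
print(folder_oid_by_name(folders, "Sales"))  # 5d9f1c2e8b1e4a0001234567
```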

How to get user information while running load tests in Locust

I provide number of users = 12 and hatch rate = 2.
How can I get the user ID(s) of all users hitting my web page? I would like to do some customizations based on the names of the objects being created (say, an article title).
How do I pass user information (say, a user ID) while creating new articles, so that if I run a test with 12 users, I know which user created each article?
from locust import HttpLocust, TaskSet, task

class UserTasks(TaskSet):
    @task
    def create_new_article(self):
        with self.client.request('post', "/articles", data={"article[title]": "computer", "article[content]": "pc"}, catch_response=True) as response:
            print(response)
How can I get user id(s) of all users hitting my web page?
This depends on how your web server is set up. What exactly is a user ID in your application's context?
I'll proceed with the assumption that you have some mechanism by which you can generate a user ID.
You could have your client side get the user ID(s) (using JavaScript, for example) and then pass each ID along to the server in an HTTP request, where you could define a custom header to contain your user ID for that request.
For example, if you're using Flask/Python to handle all the business logic of your web application, then you might have some code like:
from flask import Flask, request

app = Flask(__name__)

@app.route("/articles", methods=["POST"])
def do_something_with_user_id():
    # do_something is a placeholder for your business logic
    do_something(request.headers.get("user-id"))
    return "OK"

if __name__ == "__main__":
    app.run()
How to pass user information (say user id) while creating new articles?
You could change your POST request line in your Locust script to something like:
with self.client.request('post',"/articles",headers=header_with_user_id,data={"article[title]":"computer","article[content]":"pc"},catch_response=True) as response:
where header_with_user_id could be defined as follows:
header_with_user_id = { "user-id": <some user ID>}
where <some user ID> would be a stringified version of whatever your mechanism to obtain the user ID gets you.
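One simple way to produce that user ID is a shared counter that hands each simulated user a distinct value. This is a sketch, not part of Locust's API; _user_ids and next_user_header are hypothetical helpers you could call from a TaskSet's on_start and reuse as the headers= argument shown above.

```python
from itertools import count

# Hypothetical helper (not Locust API): a process-wide counter that hands
# each simulated user a distinct ID for the custom "user-id" header.
_user_ids = count(1)

def next_user_header():
    # Call once per simulated user (e.g. in on_start) and reuse the dict
    # as the headers= argument of self.client.request(...).
    return {"user-id": str(next(_user_ids))}

print(next_user_header())  # first call: {'user-id': '1'}
```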

Scraping data out of facebook using scrapy

The new graph search on facebook lets you search for current employees of a company using query token - Current Google employees (for example).
I want to scrape the results page (http://www.facebook.com/search/104958162837/employees/present) via scrapy.
The initial problem was that Facebook only allows a logged-in user to access the information, so it redirected me to login.php. So, before scraping this URL, I logged in via scrapy and then requested the results page. But even though the HTTP response is 200 for this page, it does not scrape any data. The code is as follows:
from scrapy import log
from scrapy.spider import BaseSpider
from scrapy.http import FormRequest, Request
from scrapy.selector import HtmlXPathSelector

class DmozSpider(BaseSpider):
    name = "test"
    start_urls = ['https://www.facebook.com/login.php']
    # query holds the search results URL (the /search/.../employees/present page)
    task_urls = [query]

    def parse(self, response):
        return [FormRequest.from_response(response,
                                          formname='login_form',
                                          formdata={'email': 'myemailid', 'pass': 'myfbpassword'},
                                          callback=self.after_login)]

    def after_login(self, response):
        if "authentication failed" in response.body:
            self.log("Login failed", level=log.ERROR)
            return
        return Request(query, callback=self.page_parse)

    def page_parse(self, response):
        hxs = HtmlXPathSelector(response)
        print(hxs)
        items = hxs.select('//div[@class="_4_yl"]')
        print(items)
What could I have missed or done incorrectly?
The problem is that the search results (specifically the div initial_browse_result) are loaded dynamically via JavaScript. Scrapy receives the page before those scripts run, so the results are not there yet.
Basically, you have two options here:
try to simulate these js (XHR) requests in scrapy, see:
Scraping ajax pages using python
Can scrapy be used to scrape dynamic content from websites that are using AJAX?
use the combination of scrapy and selenium, or scrapy and mechanize to load the whole page with the content, see:
Executing Javascript Submit form functions using scrapy in python
this answer
If you go with the first option, you should analyze all requests made during the page load and figure out which one is responsible for getting the data you want to scrape.
The second is pretty straightforward and will definitely work - you just use another tool to get the page with the JavaScript-loaded data, then parse it into scrapy items.
Hope that helps.